AWS file upload more than 5mb - nodejs - angularjs

I am using this module to upload files to Amazon S3:
https://www.npmjs.com/package/streaming-s3. It works fine as long as the file is 5 MB or smaller.
I tried to upload a 6 MB PDF file. The upload is reported as successful, but when I try to open that file through AWS,
it shows "Failed to load PDF document".
When I check the size on AWS, it shows 5 MB.
I am using the following code to upload to AWS:
var streamingS3 = require('streaming-s3');

var options = {
  concurrentParts: 2,
  waitTime: 20000,
  retries: 2,
  maxPartSize: 10 * 1024 * 1024
};

// fileReadStream, config and awsHeader are defined elsewhere.
// Call the stream constructor to upload the file to S3.
var uploader = new streamingS3(fileReadStream, config.aws.accessKey, config.aws.secretKey, awsHeader, options);

// Start uploading (important if no callback is provided).
uploader.begin();

// Handle the uploader events.
uploader.on('data', function (bytesRead) {
  console.log(bytesRead, ' bytes read.');
});

uploader.on('part', function (number) {
  console.log('Part ', number, ' uploaded.');
});

// All parts uploaded, but upload not yet acknowledged.
uploader.on('uploaded', function (stats) {
  console.log('Upload stats: ', stats);
});

uploader.on('finished', function (response, stats) {
  console.log(response);
  logger.log('info', "UPLOAD ", response);
  cb(null, response);
});

uploader.on('error', function (err) {
  console.log('Upload error: ', err);
  logger.log('error', "UPLOAD Error: ", err);
  cb(err);
});
This works fine for files smaller than 5 MB.
Any idea? Is there any setting I need to change on AWS?
Thanks

This is the desired behaviour when piping content to S3 via the multipart upload API: you can keep memory usage low even when operating on a stream that is gigabytes in size. The stream avoids high memory usage by flushing to S3 in 5 MB parts, so it should only ever hold about 5 MB of stream data at a time.
The problem you are facing here is that the next part is not being appended to the upload.
Refer to this link for end-to-end details:
https://www.npmjs.com/package/s3-upload-stream
You can also track the upload progress to debug the issue:
/* Handle progress. Example details object:
   { ETag: '"f9ef956c83756a80ad62f54ae5e7d34b"',
     PartNumber: 5,
     receivedSize: 29671068,
     uploadedSize: 29671068 }
*/
upload.on('part', function (details) {
  console.log(details);
});
And when the complete file upload is done:
upload.on('uploaded', function (details) {
  console.log(details);
});
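For reference, a minimal usage sketch of s3-upload-stream (not from the original answer; the bucket name, key and credential fields are placeholders, and fileReadStream is assumed to be the same read stream as in the question) might look like this:
var AWS = require('aws-sdk');
var s3Stream = require('s3-upload-stream')(new AWS.S3({
  accessKeyId: config.aws.accessKey,     // placeholder credentials
  secretAccessKey: config.aws.secretKey
}));

// Create the multipart upload; Bucket and Key are placeholders.
var upload = s3Stream.upload({
  Bucket: 'my-bucket',
  Key: 'my-file.pdf',
  ContentType: 'application/pdf'
});

// Optional tuning: 5 MB parts, up to 5 parts in flight.
upload.maxPartSize(5 * 1024 * 1024);
upload.concurrentParts(5);

upload.on('error', function (error) {
  console.log(error);
});

upload.on('uploaded', function (details) {
  console.log(details);
});

// Pipe the file read stream into the multipart upload stream.
fileReadStream.pipe(upload);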

Related

Corrupt video uploads when chunking MediaRecorder to Google Cloud platform

I am currently using a React hook-powered component to record my screen and subsequently upload it to Google Cloud Storage. However, when it finishes, the file created inside Google Cloud Storage appears to be corrupt.
This is the gist of the code within my React component, where useMediaRecorder is from here: https://github.com/wmik/use-media-recorder -
let {
  error,
  status,
  mediaBlob,
  stopRecording,
  getMediaStream,
  startRecording,
  liveStream,
} = useMediaRecorder({
  onCancelScreenShare: () => {
    stopRecording();
  },
  onDataAvailable: (chunk) => {
    // do the uploading here:
    onChunk(chunk);
  },
  recordScreen: true,
  blobOptions: { type: "video/webm;codecs=vp8,opus" },
  mediaStreamConstraints: { audio: audioEnabled, video: true },
});
As data becomes available through this hook, it calls onChunk(chunk), passing a binary Blob to that method. To perform the upload, I tie in with this section of code:
const onChunk = (binaryData) => {
  var formData = new FormData();
  formData.append("data", binaryData);
  let customerApi = new CustomerVideoApi();
  customerApi.uploadRecording(
    videoUUID,
    formData,
    (res) => {},
    (err) => {}
  );
};
customerApi.uploadRecording looks like this (using axios):
const uploadRecording = (uuid, data, fn, fnErr) => {
  axios
    .post(endpoint + "/stream/upload", data, {
      headers: {
        "Content-Type": "multipart/form-data",
      },
    })
    .then(function (response) {
      fn(response);
    })
    .catch(function (error) {
      fnErr(error.response);
    });
};
The HTTP request succeeds, and all is well with the world. The server-side upload code is based on Laravel:
// this is inside the controller.
public function index( Request $request )
{
    // Set file attributes.
    $filepath = '/public/chunks/';
    $file = $request->file('data');
    $filename = $uuid . ".webm";
    // streamupload
    File::streamUpload($filepath, $filename, $file, true);
    return response()->json(['uploaded' => true, 'uuid' => $uuid]);
}
// There's a service provider used to create a new macro on the File:: object,
// providing the facility for appropriately handling the stream:
public function boot()
{
    File::macro('streamUpload', function($path, $fileName, $file, $overWrite = true) {
        $resource = fopen($file->getRealPath(), 'r+');
        $storageClient = new StorageClient([
            'projectId' => 'myprjectid',
            'keyFilePath' => '/my/path/to/servicejson.json',
        ]);
        $bucket = $storageClient->bucket('mybucket');
        $adapter = new GoogleStorageAdapter($storageClient, $bucket);
        $filesystem = new Filesystem($adapter);
        return $overWrite
            ? $filesystem->putStream($fileName, $resource)
            : $filesystem->writeStream($fileName, $resource);
    });
}
So to reiterate:
1. The React app chunks out blobs.
2. The server side determines whether it should create or append in Google Cloud Storage.
3. The server side succeeds.
4. The video inside Google Cloud Storage is corrupted.
However, the video file inside the Google Cloud bucket is corrupted and won't play. I'm unsure exactly why it is corrupted, but my guesses so far are:
Some sort of dodgy MIME type problem. Different browsers seem to handle the codec/filetype from the MediaRecorder differently: e.g. Chrome seems to produce x-matroska (.mkv?), Firefox something different again. Ideally I would have a .webm container. Notice how I set the file name server side, and it isn't coming from the client; should it? I'm unsure how to force the MediaRecorder to use a specific mimeType. I thought the blobOptions option should do it, but changing the extension and MIME type seems to have little to no impact on the corruption occurring.
Some sort of problem during upload where the HTTP requests don't execute and finish in order, e.g.
1. onDataAvailable completes second
2. onDataAvailable completes first
3. onDataAvailable completes third
I've sort of ruled this out because I think the chunks should be small enough.
Some sort of problem with the Google Cloud Storage APIs that I'm using, perhaps in the wrong way? Does the cloud platform support streaming, and does this library send the correct params to do so?
Some sort of problem with how I'm uploading: should the axios headers be multipart form data, or something else?
This is the package I'm using for the Server side: https://github.com/Superbalist/flysystem-google-cloud-storage
Can anyone shed any light on how to achieve this goal of streaming up into Google Cloud without the video from the MediaRecorder being corrupted? Hopefully there's enough detail here in the question to help figure it out. The problem as illustrated isn't getting the file as far as Google Cloud, but rather the resulting file being unplayable in any video player.
Update
I've ordered my chunks client-side now and queued them properly before letting them reach the server. No difference to the output. As some have suggested, a single blob upload request works fine.
I tried using the resumable config param (from reading the source code, it seems like chunks need to be a certain size before Google recognises them as a resumable upload):
$filesystem = new Filesystem($adapter, [
    'resumable' => true
]);
I'm not sure how resumable uploads (https://cloud.google.com/storage/docs/performing-resumable-uploads) are implemented within the libraries I'm using (or within the Google Cloud APIs themselves, if at all). Do I need to implement that myself? Documentation is light on Google's part.
Short version:
The first thing you should do is buffer the whole video locally and send a single payload to the server and on to Google Cloud Storage. This will validate that your code for a small video is actually correct. Once you can verify this, you can move on to handling multi-chunk uploads.
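For example, a minimal sketch of that sanity check (reusing the names from the question; not a definitive implementation) would buffer every chunk and upload a single Blob once recording stops:
// Collect every chunk locally, then upload once when recording stops.
const recordedChunks = [];

const onChunk = (chunk) => {
  recordedChunks.push(chunk);
};

// Hypothetical handler invoked after stopRecording() has completed.
// (Alternatively, the hook's mediaBlob may already contain the full recording.)
const onRecordingStopped = () => {
  const fullVideo = new Blob(recordedChunks, { type: "video/webm" });
  const formData = new FormData();
  formData.append("data", fullVideo);
  new CustomerVideoApi().uploadRecording(videoUUID, formData, (res) => {}, (err) => {});
};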
Longer version:
For starters, you aren't passing the uuid with the request, even though it's being used server side:
const uploadRecording = (uuid, data, fn, fnErr) => {
  axios
    .post(endpoint + "/stream/upload", data, {
      headers: {
        "Content-Type": "multipart/form-data",
      },
    })
    .then(function (response) {
      fn(response);
    })
    .catch(function (error) {
      fnErr(error.response);
    });
};
Next, you can't trust how chunking will work. I think you verified this behaviour with the out-of-order results in your chunk logging. You need to assume that your server will receive chunks out of order and handle them correctly.
Each chunk you get on the server needs to be put in the right place; you can't just "writeStream", you need to write to the explicit binary block. Specifically, specify the byte range on every request. From the Google docs:
curl -i -X PUT --data-binary @CHUNK_LOCATION \
  -H "Content-Length: CHUNK_SIZE" \
  -H "Content-Range: bytes CHUNK_FIRST_BYTE-CHUNK_LAST_BYTE/TOTAL_OBJECT_SIZE" \
  "SESSION_URI"
CHUNK_LOCATION is the local path to the chunk that you're currently uploading.
CHUNK_SIZE is the number of bytes you're uploading in the current request. For example, 524288.
CHUNK_FIRST_BYTE is the starting byte in the overall object that the chunk you're uploading contains.
CHUNK_LAST_BYTE is the ending byte in the overall object that the chunk you're uploading contains.
TOTAL_OBJECT_SIZE is the total size of the object you are uploading.
SESSION_URI is the value returned in the Location header when you initiated the resumable upload.
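To make the byte-range bookkeeping concrete, here is a minimal Node.js sketch (not from the original answer) of forwarding one chunk to an already-initiated resumable-upload session; uploadChunk, sessionUri and offset are hypothetical names for your own upload state:
const axios = require("axios");

// Hypothetical helper: push one chunk into an open resumable-upload session.
// sessionUri is the URI returned when the resumable upload was initiated,
// offset is where this chunk starts in the final object, and totalSize is
// the complete object size (use "*" while it is still unknown).
// Note: GCS expects intermediate chunks to be multiples of 256 KiB.
async function uploadChunk(sessionUri, chunk, offset, totalSize) {
  const firstByte = offset;
  const lastByte = offset + chunk.length - 1;
  return axios.put(sessionUri, chunk, {
    headers: {
      "Content-Length": chunk.length,
      "Content-Range": `bytes ${firstByte}-${lastByte}/${totalSize}`,
    },
    // Google responds with 308 while more chunks are still expected.
    validateStatus: (status) => status === 200 || status === 201 || status === 308,
  });
}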
Try to eliminate as many variables as possible and pinpoint where exactly the file is getting corrupted.
Since you are using a React (JS) -> Laravel (PHP) -> Google Cloud path,
the first thing I would suggest is to test each step separately:
React -> Laravel: save the file on your server and check whether it is corrupted at this point.
Laravel -> Google Cloud: load a file from the server filesystem, upload it to the cloud, and see whether it gets corrupted.
I don't have experience with Google Cloud, but I did something very similar with AWS and found that their video uploading service was extremely picky about the requests (including the order of headers that were sent).
Try to compare the specs of the service you are using with your input, make the smallest possible thing that works, and start adding variables until you get to the final state.
Also, I don't see any kind of data ordering in your code.
If your chunks are sent close to each other (and with streaming that is highly likely), there is a chance they will arrive in a different order than they were originally sent. If you just append them to a file without any control over the ordering, the file will indeed get corrupted. I'm not sure whether, for webm, that would cause just parts of the video to be broken or the entire thing to die.
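As an illustration of that ordering point, a minimal client-side sketch (hypothetical, reusing videoUUID and endpoint from the question) could tag each chunk with a sequence number and only send chunk N+1 after chunk N has been accepted:
let sequence = 0;
let uploadQueue = Promise.resolve();

const onChunk = (binaryData) => {
  const index = sequence++;
  // Chain uploads so chunk N+1 is only sent after chunk N has been accepted.
  uploadQueue = uploadQueue.then(() => {
    const formData = new FormData();
    formData.append("data", binaryData);
    formData.append("index", index);     // lets the server order/append correctly
    formData.append("uuid", videoUUID);  // assumes videoUUID is in scope, as in the question
    return axios.post(endpoint + "/stream/upload", formData, {
      headers: { "Content-Type": "multipart/form-data" },
    });
  });
};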

DOMException: The requested file could not be read, typically due to permission problems that have occurred after a reference to a file was acquired

I am trying to read a file using FileReader and the file size is 5.9 GB. When this code runs:
var file = document.getElementById('uploadFileId').files[0];
let reader = new FileReader();
reader.onerror = function() {
  console.log(reader.error);
};
reader.onload = function(e) {
  console.log(" e.target.result ", e.target.result);
};
reader.readAsArrayBuffer(file);
the above error is generated in AngularJS.
What I want to achieve is to divide the file into 5 MB chunks and send them to the server.
I'm getting the same message, but only for files over 2GB. Seems as though there is a file size limit that triggers this unhelpful message.
This seems related to the Chrome 2GB ArrayBuffer size limit (other browsers have higher limits).
One solution is to upload the file chunks and then save them all to a file on the server:
const writableStream = new WritableStream({
  start(controller) { },
  async write(chunk, controller) {
    console.log(chunk);
    // upload the chunks here
  },
  close() { },
  abort(reason) { },
});

const stream = e.target.files[0].stream();
stream.pipeTo(writableStream);
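Since the goal stated in the question is explicit 5 MB chunks, another option (a minimal sketch, not a definitive implementation) is to slice the File with Blob.slice so the browser never holds the whole 5.9 GB in memory; uploadChunk below is a hypothetical function that POSTs one part to your server:
const CHUNK_SIZE = 5 * 1024 * 1024; // 5 MB

async function uploadInChunks(file) {
  const totalChunks = Math.ceil(file.size / CHUNK_SIZE);
  for (let index = 0; index < totalChunks; index++) {
    const start = index * CHUNK_SIZE;
    const end = Math.min(start + CHUNK_SIZE, file.size);
    // slice() returns a lightweight Blob reference; nothing is read into memory yet.
    const chunk = file.slice(start, end);
    await uploadChunk(chunk, index); // hypothetical upload call
  }
}

const file = document.getElementById('uploadFileId').files[0];
uploadInChunks(file);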
This is a client-side issue and can happen when the browser doesn't have access to read the contents of files in a folder (e.g. when it is run with different user credentials).
Adjusting permissions or copying the file to a less restricted location before uploading can solve this.

Downloading an Excel file causes it to corrupt

I have a simple service in Angular 2 and TypeScript that requests Excel files from a server and then opens a download dialogue for the user. However, as it currently stands, the file becomes corrupt when downloaded.
When downloaded, it opens fine in OpenOffice and its derivatives, but Microsoft Excel throws a "File is Corrupt" error and asks whether the user wants to recover as much as it can.
When Excel is prompted to recover the file, it does so successfully, and the recovered file has all the rows and data expected for it. Comparing the recovered file against the same file opened in OpenOffice reveals no outstanding differences.
The concrete Excel I am trying to download is generated with Apache POI in a microservice, then passed to the main backend and finally served to the frontend for the user to download. Both the backend and microservice are written in Java, through Spark Framework.
I made some tests on the backends, and concluded the problem is not the report generation nor the data transfer:
Asking the microservice to save the generated Excel in a file within the server and then opening such file (hereby file A) in Excel shows that file A is not corrupted.
Asking the main backend server to save the Excel file that it receives from the microservice in a file within itself and then opening such file in Excel (hereby file B) shows that file B is not corrupted.
Downloading both file A and file B through FileZilla from their respective servers yields completely uncorrupted files.
As such, I believe it is safe to assume the Excel file becomes corrupted somewhere between the time it is received on the frontend and the time the user downloads it. Additionally, the Catalina logs do not show any error that might potentially be happening.
I have read several posts that deal with the issue, including a bug report (https://github.com/angular/angular/issues/14083) that included a workaround via XMLHTTPRequest. However, none of the workarounds detailed were successful in solving my issue.
Attached is the code I am using to both obtain the Excel file from the backend and serve it to the user. I am including both an XMLHTTPRequest and an Angular http call (within comments) since those are the two main ways I have been trying to make this work. Additionally, please do take into account the code has been altered to remove information I do not wish to make public.
download(body) {
  let reply = Observable.create(observer => {
    let xhr = new XMLHttpRequest();
    xhr.open('POST', 'URL', true);
    xhr.setRequestHeader('Content-type', 'application/json;charset=UTF-8');
    xhr.setRequestHeader('Accept', 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
    xhr.setRequestHeader('Authorization', 'REDACTED');
    xhr.responseType = 'blob';
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4) {
        if (xhr.status === 200) {
          var contentType = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet';
          var blob = new Blob([xhr.response], { type: contentType });
          observer.next(blob);
          observer.complete();
        }
        else {
          observer.error(xhr.response);
        }
      }
    }
    xhr.send(JSON.stringify(body));
  });
  return reply;
  /*let headers = new Headers();
  headers.set("Authorization", 'REDACTED');
  headers.set("Accept", 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
  let requestOptions: RequestOptions = new RequestOptions({headers: headers, responseType: ResponseContentType.Blob});
  return this.http.post('URL', body, requestOptions);*/
}
Here is the code that prompts the user to download the Excel file. It is currently written to work with the XMLHttpRequest. Please note that I have also attempted to download without resorting to FileSaver, with no luck.
downloadExcel(data) {
  let body = {
    /*REDACTED*/
  }
  this.service.download(body)
    .subscribe(data => {
      FileSaver.saveAs(data, "Excel.xlsx");
    });
}
Here are the versions of the tools I am using:
NPM: 5.6.0
NodeJs: 8.11.3
Angular JS: ^6.1.0
Browsers used: Chrome, Firefox, Edge.
Any help on this issue would be appreciated. Any additional information you may need I will be happy to provide.
I think what you want is the CSV format, which opens in Excel. Update your service as follows.
You should tell Angular that you are expecting a response of type blob (Binary Large Object), which is your Excel/CSV file.
Also make sure the URL/API on your server is set to accept content-type='text/csv'.
Here's an example with Angular 2:
@Injectable()
export class YourService {
  constructor(private http: Http) {}
  // Get the file from the server.
  download() {
    this.http.get("http://localhost/..", {
      responseType: ResponseContentType.Blob,
      headers: new Headers({'Content-Type': 'text/csv'})
    }).subscribe(
      response => {
        var blob = new Blob([response.blob()], {type: 'text/csv'});
        FileSaver.saveAs(blob, 'yourFileName.csv');
      },
      error => {
        console.error('something went wrong');
      }
    );
  }
}
Have you tried uploading/downloading your xls file as base64?
var encodedXLSToUpload = 'data:application/xls;base64,' + btoa(file);
Check this for more details: Creating a Blob from a base64 string in JavaScript
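As a side note, btoa expects a binary string, so for an actual File/Blob object you would typically go through FileReader first; a minimal sketch (the function name is illustrative):
// Read a Blob/File and resolve with a data URL such as
// "data:application/vnd.openxmlformats-officedocument.spreadsheetml.sheet;base64,UEsDB..."
function blobToDataUrl(blob) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result);
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(blob);
  });
}

// Usage: blobToDataUrl(file).then(encoded => { /* upload the encoded string */ });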

create on fly zip file for download through node.js

I simply need to achieve the setup below with a Node.js script (generate the zip on the fly without ever touching disk and respond back to the client with the download). Can someone guide me and post a working script? I tried googling; it seems we can achieve this through zipstream, but I didn't find any example/working script.
1. Grab the files matching *.xml from the root folder.
2. Immediately write to the client's HTTP response the headers to say it's a download and the file name is .zip.
3. zipstream writes the header bytes of the zip container.
4. Create an HTTP request to the first image in S3.
5. Pipe that into zipstream (we don't actually need to run deflate as the images are already compressed).
6. Pipe that into the client's HTTP response.
7. Repeat for each image, with zipstream correctly writing envelopes for each file.
8. zipstream writes the footer bytes for the zip container.
9. End the HTTP response.
Thanks,
Srinivas
I had the same requirement ... stream files from Amazon S3, zip them on the fly (in memory) and deliver to the browser through node.js. My solution involved using the knox and archiver packages and piping the archive's bytes to the result stream.
Since this is on the fly, you won't know the resulting archive size, and therefore you cannot use the "Content-Length" HTTP header. Instead you'll have to use the "Transfer-Encoding: chunked" header.
The downside to "chunked" is you won't get a progress bar for the download. I've tried setting the Content-Length header to an approximate value, but this only works for Chrome and Firefox; IE corrupts the file; haven't tested with Safari.
var http = require("http");
var knox = require("knox");
var archiver = require('archiver');

// The knox S3 client and any server options are assumed to be configured elsewhere, e.g.:
// var client = knox.createClient({ key: 'AWS_KEY', secret: 'AWS_SECRET', bucket: 'mybucket' });

http.createServer(options, function(req, res) {
  var zippedFilename = 'test.zip';
  var archive = archiver('zip');
  var header = {
    "Content-Type": "application/x-zip",
    "Pragma": "public",
    "Expires": "0",
    "Cache-Control": "private, must-revalidate, post-check=0, pre-check=0",
    "Content-disposition": 'attachment; filename="' + zippedFilename + '"',
    "Transfer-Encoding": "chunked",
    "Content-Transfer-Encoding": "binary"
  };
  res.writeHead(200, header);
  archive.store = true; // don't compress the archive
  archive.pipe(res);
  client.list({ prefix: 'myfiles' }, function(err, data) {
    if (data.Contents) {
      var fileCounter = 0;
      data.Contents.forEach(function(element) {
        var fileName = element.Key;
        fileCounter++;
        client.get(element.Key).on('response', function(awsData) {
          archive.append(awsData, {name: fileName});
          awsData.on('end', function () {
            fileCounter--;
            if (fileCounter < 1) {
              archive.finalize();
            }
          });
        }).end();
      });
      archive.on('error', function (err) {
        throw err;
      });
      archive.on('finish', function (err) {
        return res.end();
      });
    }
  }).end();
}).listen(80, '127.0.0.1');

Upload file bigger than 40MB to Google App Engine?

I am creating a Google App Engine web app to "transform" files of 10 KB to 50 MB.
Scenario:
1. User opens http://fixdeck.appspot.com in a web browser
2. User clicks on "Browse", selects a file, submits
3. Servlet loads the file as an InputStream
4. Servlet transforms the file
5. Servlet saves the file as an OutputStream
6. The user's browser receives the transformed file and asks where to save it, directly as a response to the request in step 2
(For now I have not implemented step 4; the servlet sends the file back without transforming it.)
Problem: it works for 15 MB files but not for a 40 MB file, which fails with: "Error: Request Entity Too Large. Your client issued a request that was too large."
Is there any workaround for this?
Source code: https://github.com/nicolas-raoul/transdeck
Rationale: http://code.google.com/p/ankidroid/issues/detail?id=697
GAE has a hard limit of 32 MB for HTTP requests and HTTP responses. That limits the size of uploads/downloads made directly to/from a GAE app.
Revised Answer (Using Blobstore API.)
Google provides the Blobstore API for handling larger files in GAE (up to 2 GB). The overview documentation provides complete sample code. Your web form uploads the file to the blobstore. The Blobstore API then rewrites the POST back to your servlet, where you can do your transformation and save the transformed data back into the blobstore (as a new blob).
Original Answer (Didn't Consider Blobstore as an option.)
For downloading, I think the only GAE workaround would be to break the file up into multiple parts on the server and then reassemble them after downloading. That's probably not doable using a straight browser implementation, though.
(As an alternative design, perhaps you could send the transformed file from GAE to an external download location (such as S3) where it could be downloaded by the browser without the GAE limit restrictions. I don't believe GAE-initiated connections have the same request/response size limitations, but I'm not positive. Regardless, you would still be restricted by the 30-second maximum request time. To get around that, you'd have to look into GAE backend instances and come up with some sort of asynchronous download strategy.)
For uploading larger files, I've read about the possibility of using the HTML5 File APIs to slice the file into multiple chunks for uploading and then reconstructing it on the server. Example: http://www.html5rocks.com/en/tutorials/file/dndfiles/#toc-slicing-files. However, I don't know how practical a solution that really is due to changing specifications and browser capabilities.
You can use the blobstore to upload files as large as 2 gigabytes.
When uploading larger files, you can consider chunking the file into a series of smaller requests (each below the current 32 MB limit) that Google App Engine supports.
Check this package with examples - https://github.com/pionl/laravel-chunk-upload
The following is working code which uses the above package.
View
<div id="resumable-drop" style="display: none">
    <p>
        <button id="resumable-browse" class="btn btn-outline-primary"
                data-url="{{route('AddAttachments', Crypt::encrypt($rpt->DRAFT_ID))}}"
                style="width: 100%; height: 91px;">Browse Report File..</button>
    </p>
</div>
Javascript
<script>
    var $fileUpload = $('#resumable-browse');
    var $fileUploadDrop = $('#resumable-drop');
    var $uploadList = $("#file-upload-list");

    if ($fileUpload.length > 0 && $fileUploadDrop.length > 0) {
        var resumable = new Resumable({
            // Use a chunk size that is smaller than your maximum limit due to a resumable issue
            // https://github.com/23/resumable.js/issues/51
            chunkSize: 1 * 1024 * 1024, // 1MB
            simultaneousUploads: 3,
            testChunks: false,
            throttleProgressCallbacks: 1,
            // Get the url from the data-url attribute
            target: $fileUpload.data('url'),
            // Append token to the request - required for web routes
            query: {_token: $('input[name=_token]').val()}
        });

        // Resumable.js isn't supported, fall back on a different method
        if (!resumable.support) {
            $('#resumable-error').show();
        } else {
            // Show a place for dropping/selecting files
            $fileUploadDrop.show();
            resumable.assignDrop($fileUpload[0]);
            resumable.assignBrowse($fileUploadDrop[0]);

            // Handle file add event
            resumable.on('fileAdded', function (file) {
                $("#resumable-browse").hide();
                // Show progress bar
                $uploadList.show();
                // Show pause, hide resume
                $('.resumable-progress .progress-resume-link').hide();
                $('.resumable-progress .progress-pause-link').show();
                // Add the file to the list
                $uploadList.append('<li class="resumable-file-' + file.uniqueIdentifier + '">Uploading <span class="resumable-file-name"></span> <span class="resumable-file-progress"></span>');
                $('.resumable-file-' + file.uniqueIdentifier + ' .resumable-file-name').html(file.fileName);
                // Actually start the upload
                resumable.upload();
            });

            resumable.on('fileSuccess', function (file, message) {
                // Reflect that the file upload has completed
                location.reload();
            });

            resumable.on('fileError', function (file, message) {
                $("#resumable-browse").show();
                // Reflect that the file upload has resulted in error
                $('.resumable-file-' + file.uniqueIdentifier + ' .resumable-file-progress').html('(file could not be uploaded: ' + message + ')');
            });

            resumable.on('fileProgress', function (file) {
                // Handle progress for both the file and the overall upload
                $('.resumable-file-' + file.uniqueIdentifier + ' .resumable-file-progress').html(Math.floor(file.progress() * 100) + '%');
                $('.progress-bar').css({width: Math.floor(resumable.progress() * 100) + '%'});
            });
        }
    }
</script>
Controller
public function uploadAttachmentAsChunck(Request $request, $id) {
    // create the file receiver
    $receiver = new FileReceiver("file", $request, HandlerFactory::classFromRequest($request));

    // check if the upload is successful, throw exception or return response you need
    if ($receiver->isUploaded() === false) {
        throw new UploadMissingFileException();
    }

    // receive the file
    $save = $receiver->receive();

    // check if the upload has finished (in chunk mode it will send smaller files)
    if ($save->isFinished()) {
        // save the file and return any response you need, current example uses `move` function. If you are
        // not using move, you need to manually delete the file by unlink($save->getFile()->getPathname())
        $file = $save->getFile();
        $fileName = $this->createFilename($file);

        // Group files by mime type
        $mime = str_replace('/', '-', $file->getMimeType());
        // Group files by the date (week)
        $dateFolder = date("Y-m-W");

        $disk = Storage::disk('gcs');
        $gurl = $disk->put($fileName, $file);

        $draft = DB::table('draft')->where('DRAFT_ID', '=', Crypt::decrypt($id))->get()->first();
        $prvAttachments = DB::table('attachments')->where('ATTACHMENT_ID', '=', $draft->ATT_ID)->get();
        $seqId = sizeof($prvAttachments) + 1;

        // Save submission info
        DB::table('attachments')->insert(
            [
                'ATTACHMENT_ID' => $draft->ATT_ID,
                'SEQ_ID' => $seqId,
                'ATT_TITLE' => $fileName,
                'ATT_DESCRIPTION' => $fileName,
                'ATT_FILE' => $gurl
            ]
        );

        return response()->json([
            'path' => 'gc',
            'name' => $fileName,
            'mime_type' => $mime,
            'ff' => $gurl
        ]);
    }

    // we are in chunk mode, let's send the current progress
    /** @var AbstractHandler $handler */
    $handler = $save->handler();

    return response()->json([
        "done" => $handler->getPercentageDone(),
    ]);
}

/**
 * Create a unique filename for the uploaded file
 * @param UploadedFile $file
 * @return string
 */
protected function createFilename(UploadedFile $file)
{
    $extension = $file->getClientOriginalExtension();
    $filename = str_replace("." . $extension, "", $file->getClientOriginalName()); // Filename without extension

    // Add timestamp hash to name of the file
    $filename .= "_" . md5(time()) . "." . $extension;

    return $filename;
}
You can also use the Blobstore API to upload directly to Cloud Storage. Below is the link:
https://cloud.google.com/appengine/docs/python/blobstore/#Python_Using_the_Blobstore_API_with_Google_Cloud_Storage
upload_url = blobstore.create_upload_url(
    '/upload_handler',
    gs_bucket_name = YOUR.BUCKET_NAME)
template_values = { 'upload_url': upload_url }
_jinjaEnvironment = jinjaEnvironment.JinjaClass.getJinjaEnvironemtVariable()
if _jinjaEnvironment:
    template = _jinjaEnvironment.get_template('import.html')
Then in index.html:
<form action="{{ upload_url }}"
method="POST"
enctype="multipart/form-data">
Upload File:
<input type="file" name="file">
</form>
