PhoneGap photo to Google App Engine - google-app-engine

I am trying to upload a JPG photo from a PhoneGap app (JavaScript) to Google App Engine (PHP), store parameters in a database, and store the photo in Google Cloud Storage. Everything works except the photo file transfer.
The upload function I'm using is typical of PhoneGap's file transfer example http://docs.phonegap.com/en/edge/cordova_file_file.md.html#FileTransfer.
function uploadPhoto(imageURI) {
    // imageURI is the photo's local URL (file:///Users/me/Library/...etc ... .jpg)
    // returned by the navigator.camera.getPicture function
    // prepare POST variables
    var options = new FileUploadOptions();
    options.fileKey = "file";
    options.fileName = imageURI.substr(imageURI.lastIndexOf('/') + 1);
    options.mimeType = "image/jpeg";
    var params = new Object();
    params.foo = "foo";
    options.params = params;
    options.chunkedMode = false;
    // upload the image with the options
    var ft = new FileTransfer();
    ft.upload(imageURI, 'http://myapphere.appspot.com/php/myphoto.php', function (r) {
        // do stuff with r.response
    }, function (error) {
        // handle the error
    }, options, true);
}
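The options.fileName line above simply takes everything after the last slash of the local URL. Isolated as a standalone helper (the name is illustrative, not part of the PhoneGap API), the logic looks like this:

```javascript
// Mirror of imageURI.substr(imageURI.lastIndexOf('/') + 1):
// returns the last path segment of a URI, or the whole string
// if it contains no slash (lastIndexOf returns -1, so substr(0)).
function fileNameFromUri(uri) {
    return uri.substr(uri.lastIndexOf('/') + 1);
}
```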
On the Google App Engine server, the parameter variables arrive fine; what doesn't seem to transfer is the $_FILES file.
/* php/myphoto.php */
// option variables are passed - this works
$foo = $_POST["foo"];
// but it seems $_FILES is empty?
$gs_name = $_FILES["file"]["tmp_name"]; // <-- not transferring?
$fileName = 'test.jpg';
$moveResult = move_uploaded_file($gs_name, "gs://mybucket/" . $fileName);
A file test.jpg is stored in mybucket, but it is a small blank binary/octet file. As a test, I created an HTML form on GAE to upload an image and move_uploaded_file the $_FILES entry to mybucket, and it works. It's the transfer of $_FILES from the JavaScript app that I can't figure out. (I'm aware of the Google Cloud Storage JSON API objects.insert, but here I'd like to go from the PhoneGap HTML/JavaScript app to a Google App Engine PHP page that processes the passed data.)

Related

Download files from firebase storage using ReactJs

I have successfully managed to upload files to Firebase Storage, but now I want to display all the files in a table, with an option to download each file. I've read the Firebase documentation but it won't work. I click a button whose function is to get all the files, and then I want to visualize them in a table that users can see:
Show file function:
showFileUrl() {
    storageRef.child('UploadedFiles/').listAll().then(function (res) {
        res.items.forEach(function (folderRef) {
            console.log("folderRef", folderRef.toString());
            var blob = null;
            var xhr = new XMLHttpRequest();
            xhr.open("GET", "downloadURL");
            xhr.responseType = "blob";
            xhr.onload = function () {
                blob = xhr.response; // xhr.response is now a blob object
                console.log(blob);
            };
            xhr.send();
        });
    }).catch(function (error) {
    });
}
This is the network log I found when debugging. What do I need to do to get all the data, visualize it in a table, and have a download button that downloads the file when pressed?
Network log:
Storage in firebase:
Blob object of the files:
Your code gets a list of all the files, but it doesn't actually do anything to read the data for each file.
When using the Web client SDK, the only way to get the data for a file is through a download URL, as shown here. So you'll need to:
Loop through all the files you get back from listAll() (you're already doing this).
Call getDownloadURL(), as shown here, to get a download URL for each file.
Then use another library/function (such as fetch()/XMLHttpRequest) to read the data for each file.
Alternatively, if your files are images, you can stuff the download URL in an img tag as the preview.
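Putting those steps together, a minimal sketch might look like the following. It assumes listResult has the shape that listAll() resolves with (items whose getDownloadURL() returns a Promise); the function name is illustrative:

```javascript
// Collect a { name, url } pair for every file in a listAll() result,
// so the caller can render a table row (or an <img> preview) per file.
function collectDownloadUrls(listResult) {
    return Promise.all(listResult.items.map(function (itemRef) {
        return itemRef.getDownloadURL().then(function (url) {
            return { name: itemRef.name, url: url };
        });
    }));
}
```

Once the promise resolves, each url can be fetched with fetch()/XMLHttpRequest, or dropped straight into an img tag as a preview.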

Saving image file on filesystem on Google App Engine with Python Flask

I have a Flask app where a user can upload an image and the image is saved on a static folder on the filesystem.
Currently, I'm using Google App Engine for hosting and found that it's not possible to save to the static folder on the standard environment. Here is the code
def save_picture(form_picture, name):
    picture_fn = name + '.jpg'
    picture_path = os.path.join(app.instance_path, 'static/image/' + picture_fn)
    output_size = (1000, 1000)
    i = Image.open(form_picture)
    i.thumbnail(output_size)
    i.save(picture_path)
    return picture_path

@app.route('/image/add', methods=['GET', 'POST'])
def addimage():
    form = Form()
    if form.validate_on_submit():
        name = 'randomname'
        try:
            picture_file = save_picture(form.image.data, name)
            return redirect(url_for('addimage'))
        except:
            flash("unsuccess")
            return redirect(url_for('addimage'))
My question is: if I change from the standard environment to the flexible environment, would it be possible to save to a static folder? If not, what other hosting options should I consider? Do you have any suggestions?
Thanks in advance.
Following your advice, I'm changing to use Cloud Storage. I'm wondering which of upload_from_file(), upload_from_filename(), or upload_from_string() I should use. The source file takes its data from form.photo.data from a flask-wtforms form. I'm not successfully saving to Cloud Storage yet. This is my code:
def upload_blob(bucket_name, source_file, destination_blob_name):
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_filename(source_file)
    return destination_blob_name

@app.route('/image/add', methods=['GET', 'POST'])
def addimage():
    form = Form()
    if form.validate_on_submit():
        name = 'randomname'
        try:
            filename = 'foldername/' + name + '.jpg'
            picture_file = upload_blob('mybucketname', form.photo.data, filename)
            return redirect(url_for('addimage'))
        except:
            flash("unsuccess")
            return redirect(url_for('addimage'))
I have successfully managed to save the file on Google Cloud Storage by changing the save_picture function, just in case anyone has trouble with this in the future:
app.config['BUCKET'] = 'yourbucket'
app.config['UPLOAD_FOLDER'] = '/tmp'

def save_picture(form_picture, name):
    picture_fn = secure_filename(name + '.jpg')
    picture_path = os.path.join(app.config['UPLOAD_FOLDER'], picture_fn)
    output_size = (1000, 1000)
    i = Image.open(form_picture)
    i.thumbnail(output_size)
    i.save(picture_path)
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(app.config['BUCKET'])
    blob = bucket.blob('static/image/' + picture_fn)
    blob.upload_from_filename(picture_path)
    return picture_path
The problem with storing it to some folder is that it would live on that one instance and other instances would not be able to access it. Furthermore, instances in GAE come and go, so you would lose the image eventually.
You should use Google Cloud Storage for this:
from google.cloud import storage
client = storage.Client()
bucket = client.get_bucket('bucket-id-here')
blob = bucket.get_blob('remote/path/to/file.txt')
blob.upload_from_string('New contents!')
https://googleapis.dev/python/storage/latest/index.html
With Flask and Appengine, Python3.7, I save files to a bucket in the following way, because I want to loop it for many files:
for key, upload in request.files.items():
    file_storage = upload
    content_type = None
    identity = str(uuid.uuid4())  # or uuid.uuid4().hex
    try:
        upload_blob("f00b4r42.appspot.com", request.files[key], identity, content_type=upload.content_type)
The helper function:
from google.cloud import storage

def upload_blob(bucket_name, source_file_name, destination_blob_name, content_type="application/octet-stream"):
    """Uploads a file to the bucket."""
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)
    blob.upload_from_file(source_file_name, content_type=content_type)
    blob.make_public()
    print('File {} uploaded to {}.'.format(source_file_name, destination_blob_name))
Changing from the Google App Engine standard environment to the flexible environment will allow you to write to disk, as well as to choose a Compute Engine machine type with more memory for your specific application [1]. If you are interested in following this path, find all the relevant documentation on migrating a Python app here.
Nonetheless, as user #Alex explained in his answer, instances are created (the number of instances is scaled up) or deleted (scaled down) according to your load, so the better option in your particular case would be to use Cloud Storage. Find an example of uploading objects to Cloud Storage with Python here.

Download xlsx file passed from nodejs to frontend angularjs

I have a JSON array of objects that is a result of a function in nodejs. I use json2xls to convert that to an excel file, and it downloads to the server (not in a public folder, and is formatted correctly in Excel).
I would like to send a response to the frontend with the json results (to display as a preview) and show a button they can click to download the xlsx file OR display the JSON results and automatically download the file.
But I can't get it, and I've tried so many things I'm going crazy.
My controller code (the part that creates the xls file):
var xls = json2xls(results,{});
var today = (new Date()).toDateString('yyyy-mm-dd');
var str = today.replace(/\s/g, '');
var fileName = "RumbleExport_"+ str +".xlsx";
var file = fs.writeFileSync(fileName,xls,'binary');
res.download('/home/ubuntu/workspace/'+file);
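One quirk in the snippet above: Date.prototype.toDateString() takes no arguments, so the 'yyyy-mm-dd' passed to it is ignored; it always returns a fixed-format string like "Mon Jan 06 2020", which the replace() then strips of spaces. A small sketch making that intent explicit (hypothetical helper):

```javascript
// Build an export filename such as "RumbleExport_MonJan062020.xlsx".
// toDateString() ignores any arguments and always yields e.g. "Mon Jan 06 2020".
function exportFileName(date) {
    var str = date.toDateString().replace(/\s/g, '');
    return "RumbleExport_" + str + ".xlsx";
}
```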
The frontend controller:
vm.exportData = function (day, event, division) {
    console.log('Export registrations button pressed.', vm.export);
    // send the search parameters to the backend to run checks
    $http.post('/api/exportData', vm.export).then(function (response) {
        vm.results = response.data;
        console.log("Results", response);
        vm.exportMessage = "Found " + vm.results.length + " registrations.";
    })
    .catch(function (error) {
        vm.exportError = error.data;
    });
};
The view:
//display a button to download the export file
<a target="_self" file="{{vm.results}}" download="{{vm.results}}">Download Export File</a>
Someone please put me out of my misery. All the classes I've taken and none have covered this.
I FINALLY got it! And since I searched forever trying to make something work, I'll share the answer:
On the backend:
//save the file to the public/exports folder
var file = fs.writeFileSync('./public/exports/' + fileName, xls, 'binary');
//send the results to the frontend (note: res.status(200), not res.json(200))
res.status(200).json({results: results, fileName: fileName});
On the frontend, use HTML to download a link to the file:
<a href="exports/{{fileName}}" download>Save File</a>

Dynamic content Single Page Application SEO

I am new to SEO and just want to get the idea about how it works for Single Page Application with dynamic content.
In my case, I have a single-page application (powered by AngularJS, using a router to show different states) that provides some location-based search functionality, similar to Zillow, Redfin, or Yelp. On my site, a user can type in a location name, and the site will return some results based on the location.
I am trying to figure out a way to make it work well with Google. For example, if I type in "Apartment San Francisco" in Google, the results will be:
And when user click on these links, the sites will display the correct result. I am thinking about having similar SEO like these for my site.
The question is, the page content depends purely on the user's query. Users can search by city name, state name, zip code, etc., to show different results, and it's not possible to put them all into a sitemap. How can Google crawl the content for these kinds of dynamic page results?
I don't have experience with SEO and not sure how to do it for my site. Please share some experience or pointers to help me get started. Thanks a lot!
===========
Follow up question:
I saw that Googlebot can now run JavaScript, and I want to understand this a bit more. When a specific URL of my SPA is opened, it makes some network queries (XHR requests) for a few seconds and then the page content is displayed. In this case, will Googlebot wait for the HTTP responses?
I saw some tutorials saying we need to prepare static HTML specifically for search engines. If I only want to deal with Google, does that mean I don't have to serve static HTML anymore, because Google can run JavaScript?
Thanks again.
If a search engine comes across your JavaScript application, we are permitted to redirect the search engine to another URL that serves the fully rendered version of the page.
For this job, you can either use this tool by Thomas Davis, available on GitHub:
SEOSERVER
Or you can use the code below, which does the same job as the tool above; this code is also available here.
Implementation using Phantom.js
We can set up a node.js server that, given a URL, fully renders the page content. Then we redirect bots to this server to retrieve the correct content.
We will need to install node.js and phantom.js onto a box, then start up the server below. There are two files: one is the web server, and the other is a phantomjs script that renders the page.
// web.js
// Express is our web server that can handle requests
var express = require('express');
var app = express();

var getContent = function (url, callback) {
    var content = '';
    // Here we spawn a phantom.js process; the first element of the
    // array is our phantomjs script and the second element is our url
    var phantom = require('child_process').spawn('phantomjs', ['phantom-server.js', url]);
    phantom.stdout.setEncoding('utf8');
    // Our phantom.js script simply logs the output, and
    // we access it here through stdout
    phantom.stdout.on('data', function (data) {
        content += data.toString();
    });
    phantom.on('exit', function (code) {
        if (code !== 0) {
            console.log('We have an error');
        } else {
            // once our phantom.js script exits, let's call our callback,
            // which outputs the contents to the page
            callback(content);
        }
    });
};

var respond = function (req, res) {
    // Because we use [P] in .htaccess we have access to this header
    var url = 'http://' + req.headers['x-forwarded-host'] + req.params[0];
    getContent(url, function (content) {
        res.send(content);
    });
};

app.get(/(.*)/, respond);
app.listen(3000);
The script below is phantom-server.js and is in charge of fully rendering the content. We don't return the content until the page is fully rendered; we hook into the resource listeners to do this.
var page = require('webpage').create();
var system = require('system');

var lastReceived = new Date().getTime();
var requestCount = 0;
var responseCount = 0;
var requestIds = [];
var startTime = new Date().getTime();

page.onResourceReceived = function (response) {
    if (requestIds.indexOf(response.id) !== -1) {
        lastReceived = new Date().getTime();
        responseCount++;
        requestIds[requestIds.indexOf(response.id)] = null;
    }
};

page.onResourceRequested = function (request) {
    if (requestIds.indexOf(request.id) === -1) {
        requestIds.push(request.id);
        requestCount++;
    }
};

// Open the page
page.open(system.args[1], function () {});

var checkComplete = function () {
    // We don't allow it to take longer than 5 seconds but
    // don't return until all requests are finished
    if ((new Date().getTime() - lastReceived > 300 && requestCount === responseCount) || new Date().getTime() - startTime > 5000) {
        clearInterval(checkCompleteInterval);
        console.log(page.content);
        phantom.exit();
    }
};

// Let us check to see if the page is finished rendering
var checkCompleteInterval = setInterval(checkComplete, 1);
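The completion condition above can be isolated as a pure predicate, which makes the quiet-period/timeout rule easier to follow (the function name is illustrative; the 300 ms and 5 s constants mirror the script above):

```javascript
// True when either (a) no resource response has arrived for over 300ms and
// every request has been answered, or (b) more than 5 seconds have elapsed.
function isRenderComplete(now, lastReceived, startTime, requestCount, responseCount) {
    var quiet = now - lastReceived > 300 && requestCount === responseCount;
    var timedOut = now - startTime > 5000;
    return quiet || timedOut;
}
```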
Once we have this server up and running we just redirect bots to the server in our client's web server configuration.
Redirecting bots
If you are using Apache, we can edit our .htaccess such that Google requests are proxied to our middleman phantom.js server.
RewriteEngine on
RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
RewriteRule (.*) http://webserver:3000/%1? [P]
We could also include other RewriteCond directives, such as one on the user agent, to redirect other search engines we wish to be indexed on.
Note that Google won't use _escaped_fragment_ unless we tell it to, either by including a meta tag, <meta name="fragment" content="!">, or by using #! URLs in our links.
You will most likely have to use both.
This has been tested with the Google Webmasters fetch tool. Make sure you include #! in your URLs when using the fetch tool.

Upload file bigger than 40MB to Google App Engine?

I am creating a Google App Engine web app to "transform" files of 10K~50M
Scenario:
User opens http://fixdeck.appspot.com in web browser
User clicks on "Browse", select file, submits
Servlet loads file as an InputStream
Servlet transforms file
Servlet saves file as an OutputStream
The user's browser receives the transformed file and asks where to save it, directly as a response to the request in step 2
(For now I did not implement step 4, the servlet sends the file back without transforming it.)
Problem: It works for 15MB files but not for a 40MB file, saying: "Error: Request Entity Too Large. Your client issued a request that was too large."
Is there any workaround for this?
Source code: https://github.com/nicolas-raoul/transdeck
Rationale: http://code.google.com/p/ankidroid/issues/detail?id=697
GAE has a hard limit of 32MB for HTTP requests and HTTP responses. That limits the size of uploads/downloads directly to/from a GAE app.
Revised Answer (Using Blobstore API.)
Google provides the Blobstore API for handling larger files in GAE (up to 2GB). The overview documentation provides complete sample code. Your web form uploads the file to the blobstore. The Blobstore API then rewrites the POST back to your servlet, where you can do your transformation and save the transformed data back into the blobstore (as a new blob).
Original Answer (Didn't Consider Blobstore as an option.)
For downloading, I think the only GAE workaround would be to break the file up into multiple parts on the server, and then reassemble them after downloading. That's probably not doable using a straight browser implementation, though.
(As an alternative design, perhaps you could send the transformed file from GAE to an external download location (such as S3) where it could be downloaded by the browser without the GAE limit restrictions. I don't believe GAE-initiated connections have the same request/response size limitations, but I'm not positive. Regardless, you would still be restricted by the 30-second maximum request time. To get around that, you'd have to look into GAE backend instances and come up with some sort of asynchronous download strategy.)
For uploading larger files, I've read about the possibility of using the HTML5 File APIs to slice the file into multiple chunks for uploading, and then reconstructing it on the server. Example: http://www.html5rocks.com/en/tutorials/file/dndfiles/#toc-slicing-files . However, I don't know how practical a solution that really is, due to changing specifications and browser capabilities.
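As a rough sketch of that slicing idea (a hypothetical helper, not tied to any particular library): compute the byte ranges to pass to File.prototype.slice(), keeping each chunk under the 32MB request limit.

```javascript
// Split a file of fileSize bytes into [start, end) ranges of at most
// chunkSize bytes each, suitable for file.slice(start, end).
function chunkRanges(fileSize, chunkSize) {
    var ranges = [];
    for (var start = 0; start < fileSize; start += chunkSize) {
        ranges.push([start, Math.min(start + chunkSize, fileSize)]);
    }
    return ranges;
}
```

Each range would then be uploaded as its own request and reassembled in order on the server.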
You can use the blobstore to upload files as large as 2 gigabytes.
When uploading larger files, you can chunk the file into a series of smaller requests, each below the current 32MB limit that Google App Engine supports.
Check this package with examples - https://github.com/pionl/laravel-chunk-upload
Following is a working code which uses the above package.
View
<div id="resumable-drop" style="display: none">
    <p><button id="resumable-browse" class="btn btn-outline-primary" data-url="{{route('AddAttachments', Crypt::encrypt($rpt->DRAFT_ID))}}" style="width: 100%; height: 91px;">Browse Report File..</button></p>
</div>
Javascript
<script>
    var $fileUpload = $('#resumable-browse');
    var $fileUploadDrop = $('#resumable-drop');
    var $uploadList = $("#file-upload-list");

    if ($fileUpload.length > 0 && $fileUploadDrop.length > 0) {
        var resumable = new Resumable({
            // Use a chunk size that is smaller than your maximum limit, due to a resumable issue
            // https://github.com/23/resumable.js/issues/51
            chunkSize: 1 * 1024 * 1024, // 1MB
            simultaneousUploads: 3,
            testChunks: false,
            throttleProgressCallbacks: 1,
            // Get the url from the data-url tag
            target: $fileUpload.data('url'),
            // Append token to the request - required for web routes
            query: {_token: $('input[name=_token]').val()}
        });
        // Resumable.js isn't supported, fall back on a different method
        if (!resumable.support) {
            $('#resumable-error').show();
        } else {
            // Show a place for dropping/selecting files
            $fileUploadDrop.show();
            resumable.assignDrop($fileUpload[0]);
            resumable.assignBrowse($fileUploadDrop[0]);
            // Handle file add event
            resumable.on('fileAdded', function (file) {
                $("#resumable-browse").hide();
                // Show progress bar
                $uploadList.show();
                // Show pause, hide resume
                $('.resumable-progress .progress-resume-link').hide();
                $('.resumable-progress .progress-pause-link').show();
                // Add the file to the list
                $uploadList.append('<li class="resumable-file-' + file.uniqueIdentifier + '">Uploading <span class="resumable-file-name"></span> <span class="resumable-file-progress"></span>');
                $('.resumable-file-' + file.uniqueIdentifier + ' .resumable-file-name').html(file.fileName);
                // Actually start the upload
                resumable.upload();
            });
            resumable.on('fileSuccess', function (file, message) {
                // Reflect that the file upload has completed
                location.reload();
            });
            resumable.on('fileError', function (file, message) {
                $("#resumable-browse").show();
                // Reflect that the file upload has resulted in an error
                $('.resumable-file-' + file.uniqueIdentifier + ' .resumable-file-progress').html('(file could not be uploaded: ' + message + ')');
            });
            resumable.on('fileProgress', function (file) {
                // Handle progress for both the file and the overall upload
                $('.resumable-file-' + file.uniqueIdentifier + ' .resumable-file-progress').html(Math.floor(file.progress() * 100) + '%');
                $('.progress-bar').css({width: Math.floor(resumable.progress() * 100) + '%'});
            });
        }
    }
</script>
Controller
public function uploadAttachmentAsChunck(Request $request, $id) {
    // create the file receiver
    $receiver = new FileReceiver("file", $request, HandlerFactory::classFromRequest($request));
    // check if the upload is successful, throw an exception or return the response you need
    if ($receiver->isUploaded() === false) {
        throw new UploadMissingFileException();
    }
    // receive the file
    $save = $receiver->receive();
    // check if the upload has finished (in chunk mode it will send smaller files)
    if ($save->isFinished()) {
        // save the file and return any response you need; the current example uses the `move` function. If you are
        // not using move, you need to manually delete the file with unlink($save->getFile()->getPathname())
        $file = $save->getFile();
        $fileName = $this->createFilename($file);
        // Group files by mime type
        $mime = str_replace('/', '-', $file->getMimeType());
        // Group files by the date (week)
        $dateFolder = date("Y-m-W");
        $disk = Storage::disk('gcs');
        $gurl = $disk->put($fileName, $file);
        $draft = DB::table('draft')->where('DRAFT_ID', '=', Crypt::decrypt($id))->get()->first();
        $prvAttachments = DB::table('attachments')->where('ATTACHMENT_ID', '=', $draft->ATT_ID)->get();
        $seqId = sizeof($prvAttachments) + 1;
        // Save submission info
        DB::table('attachments')->insert(
            ['ATTACHMENT_ID' => $draft->ATT_ID,
             'SEQ_ID' => $seqId,
             'ATT_TITLE' => $fileName,
             'ATT_DESCRIPTION' => $fileName,
             'ATT_FILE' => $gurl
            ]
        );
        return response()->json([
            'path' => 'gc',
            'name' => $fileName,
            'mime_type' => $mime,
            'ff' => $gurl
        ]);
    }
    // we are in chunk mode, let's send the current progress
    /** @var AbstractHandler $handler */
    $handler = $save->handler();
    return response()->json([
        "done" => $handler->getPercentageDone(),
    ]);
}

/**
 * Create a unique filename for an uploaded file
 * @param UploadedFile $file
 * @return string
 */
protected function createFilename(UploadedFile $file)
{
    $extension = $file->getClientOriginalExtension();
    $filename = str_replace("." . $extension, "", $file->getClientOriginalName()); // Filename without extension
    // Add a timestamp hash to the name of the file
    $filename .= "_" . md5(time()) . "." . $extension;
    return $filename;
}
You can also use the Blobstore API to upload directly to Cloud Storage. Below is the link:
https://cloud.google.com/appengine/docs/python/blobstore/#Python_Using_the_Blobstore_API_with_Google_Cloud_Storage
upload_url = blobstore.create_upload_url(
    '/upload_handler',
    gs_bucket_name=YOUR.BUCKET_NAME)
template_values = {'upload_url': upload_url}
_jinjaEnvironment = jinjaEnvironment.JinjaClass.getJinjaEnvironemtVariable()
if _jinjaEnvironment:
    template = _jinjaEnvironment.get_template('import.html')
Then in index.html:
<form action="{{ upload_url }}"
method="POST"
enctype="multipart/form-data">
Upload File:
<input type="file" name="file">
</form>
