How to set up uploading binary objects through a flask_restless endpoint?

I am working on a REST Python application and have picked flask_restless to build endpoints connected to the database. One of the tables I would like to manage stores binary files as blobs (LargeBinary).
I have noticed, though, that flask_restless requires JSON data for POST requests. I tried applying base64 to the binary file contents and wrapping it in JSON, but ultimately flask_restless passed the file contents to SQLAlchemy as a string, and the SQLite backend complained that it requires bytes input (quite rightly so).
I tried searching the web for a solution, but either I am formulating my query incorrectly, or there actually is none.
So, is there a way to configure an endpoint managed by flask_restless to accept a binary file as an attachment? Or is the suggested solution to set up the endpoint for that particular table directly with Flask (I did that before in another app), away from flask_restless?

It turns out that sending an attachment is not possible.
So I dug deeper into how to send base64-encoded attachments, which would then be saved as blobs.
For that I used the pre- and postprocessing facility of flask_restless:
import base64

import flask_restless

# app, db, and Image are the Flask app, the SQLAlchemy instance, and the
# model with the LargeBinary column, defined elsewhere in the application.

def pp_get_single_image(result=None, **kw):
    # Re-encode the raw blob as base64 text so it can be serialized to JSON.
    result['image'] = base64.b64encode(result['image']).decode('utf8')

def pp_get_many_images(result=None, search_params=None, **kw):
    result['objects'] = [pp_get_single_image(d) or d for d in result['objects']]

def pp_post_image_in(data=None, **kw):
    # Decode the incoming base64 string back into bytes for the blob column.
    data['image'] = base64.b64decode(data['image'])

def pp_post_image_out(result=None, **kw):
    result['image'] = base64.b64encode(result['image']).decode('utf8')

postprocessors = dict(GET_SINGLE=[pp_get_single_image],
                      GET_MANY=[pp_get_many_images],
                      POST=[pp_post_image_out])
preprocessors = dict(POST=[pp_post_image_in])

manager = flask_restless.APIManager(app, flask_sqlalchemy_db=db)
manager.create_api(Image, methods=['GET', 'POST', 'DELETE'],
                   postprocessors=postprocessors,
                   preprocessors=preprocessors)
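For completeness, here is a sketch of what a client POST against such an endpoint might look like (the URL and field names are hypothetical and depend on your model and API prefix):

import base64
import requests

# Hypothetical client: encode the file as base64 text and POST it as JSON
# to the endpoint created above.
with open('photo.png', 'rb') as fh:
    payload = {'image': base64.b64encode(fh.read()).decode('utf8')}

resp = requests.post('http://localhost:5000/api/image', json=payload)
print(resp.json())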

Related

Flask Restplus multipart/form-data model

Is it possible with Flask Restplus to create a model for a multipart/form-data request so that I can use it to validate the input with @api.expect?
I have a complex data structure, for which I've created an api.namespace().model, that has to be received together with a file. However, when I tried to document the endpoint, I noticed that this doesn't seem to be supported by Flask Restplus.
I've tried to find something along the lines of
parser = ns.parser()
parser.add_argument("jsonModel", type=Model, location="form")
parser.add_argument("file", type=FileStorage, location="files")
and
formModel = ns.model("myForm", {"jsonModel": fields.Nested(myModel), "file": fields.File})
But neither method seems to support this kind of behavior.
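No answer is recorded here, but a commonly used workaround (a sketch, not an official Flask Restplus feature for typed JSON-in-form fields) is to document the endpoint with a RequestParser, accept the JSON part as a plain form string, and validate it manually after parsing:

import json

from flask_restplus import Namespace, Resource, reqparse
from werkzeug.datastructures import FileStorage

ns = Namespace('upload')

# The parser documents the multipart request for Swagger; the JSON model
# arrives as an ordinary form string rather than a typed field.
parser = reqparse.RequestParser()
parser.add_argument('jsonModel', type=str, location='form', required=True)
parser.add_argument('file', type=FileStorage, location='files', required=True)

@ns.route('/')
class Upload(Resource):
    @ns.expect(parser)
    def post(self):
        args = parser.parse_args()
        payload = json.loads(args['jsonModel'])  # validate against myModel here
        return {'filename': args['file'].filename, 'model': payload}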

How do we get the document file URL using the Watson Discovery Service?

I don't see a solution to this using the available api documentation.
It is also not available on the web console.
Is it possible to get the file URL using the Watson Discovery Service?
If you need to store the original source/file URL, you can include it as a field within your documents in the Discovery service; you will then be able to query that field back out when needed.
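A minimal sketch of that approach (field names here are made up; the client setup is shown in the next answer):

import json

# Hypothetical document: the original file URL is just another field, so it
# comes back in query results like any other part of the document.
doc = {
    "text": "chlorine handling instructions ...",
    "source_url": "http://mysite/dis030.docx",
}
with open('doc.json', 'w') as fh:
    json.dump(doc, fh)

# Ingest it with the same DiscoveryV1 client used below:
# discovery.add_document('{environment-id}', '{collection-id}',
#                        file=open('doc.json', 'rb')).get_result()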
I also struggled with this but ultimately got it working using the Python bindings for Watson Discovery. The online documentation and API reference are very poor; here's what I used to get it working
(assuming you have a Watson Discovery service and have created a collection):
# Programmatic upload and retrieval of documents and metadata with Watson Discovery
import json
import os

from watson_developer_cloud import DiscoveryV1

discovery = DiscoveryV1(
    version='2017-11-07',
    iam_apikey='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
    url='https://gateway-syd.watsonplatform.net/discovery/api'
)

environments = discovery.list_environments().get_result()
print(json.dumps(environments, indent=2))
This gives you your environment ID. Now append to your code:
collections = discovery.list_collections('{environment-id}').get_result()
print(json.dumps(collections, indent=2))
This will show you the collection ID to upload documents into programmatically. You should have a document to upload (in my case, an MS Word document) and its accompanying URL from your own source document system. I'll use a trivial fictitious example.
NOTE: the documentation DOES NOT tell you to append 'rb' to the open call, but it is required when uploading a Word document, as in my example below. Raw text / HTML documents can be uploaded without the 'rb' flag.
url = {"source_url": "http://mysite/dis030.docx"}
with open(os.path.join(os.getcwd(), '{path to your document folder with trailing / }', 'dis030.docx'), 'rb') as fileinfo:
    # Attach the source URL as JSON metadata alongside the file itself.
    add_doc = discovery.add_document('{environment-id}', '{collection-id}',
                                     metadata=json.dumps(url),
                                     file=fileinfo).get_result()
print(json.dumps(add_doc, indent=2))
print(add_doc["document_id"])
Note how the metadata is set up as a Python dictionary and then encoded with json.dumps in the call parameters. So far I've only wanted to store the original source URL, but you could extend this with other parameters as your own use case requires.
This call to Discovery gives you the document ID.
You can now query the collection and extract the metadata using something like a Discovery query:
my_query = discovery.query('{environment-id}', '{collection-id}', natural_language_query="chlorine safety")
print(json.dumps(my_query.result["results"][0]["metadata"], indent=2))
Note: I'm extracting just the stored metadata here from within the overall returned results. If you instead just print(my_query), you'll get the full response from Discovery, but there's a lot to go through to identify just your own custom metadata.

Programmatically emulating "gsutil mv" on appengine cloudstorage in python

I would like to implement a mv (copy-in-the-cloud) operation on google cloud storage that is similar to how gsutil does it (http://developers.google.com/storage/docs/gsutil/commands/mv).
I read somewhere earlier that this involves a read and write (download and reupload) of the data, but I cannot find the passages again.
Is this the correct way to move a file in cloud storage, or does one have to go a level down to the boto library to avoid copying the data over the network for renaming the file?
istream = cloudstorage.open(src, mode='r')
ostream = cloudstorage.open(dst, content_type=src_content, mode='w')
while True:
    buf = istream.read(500000)
    if not buf:
        break
    ostream.write(buf)
istream.close()
ostream.close()
Update: I found the REST API, which supports copy and compose operations and much more. It seems there is hope that we do not have to copy data across continents to rename something.
Useful links I have found so far:
Boto based approach: https://developers.google.com/storage/docs/gspythonlibrary
GCS Client Lib: https://developers.google.com/appengine/docs/python/googlecloudstorageclient/
GCS Lib: https://code.google.com/p/appengine-gcs-client
Raw JSON API: https://developers.google.com/storage/docs/json_api
Use the JSON API; there is a copy method. Here is the official example for Python, using the Python Google API Client lib:
# The destination object resource is entirely optional. If empty, we use
# the source object's metadata.
if reuse_metadata:
    destination_object_resource = {}
else:
    destination_object_resource = {
        'contentLanguage': 'en',
        'metadata': {'my-key': 'my-value'},
    }
req = client.objects().copy(
    sourceBucket=bucket_name,
    sourceObject=old_object,
    destinationBucket=bucket_name,
    destinationObject=new_object,
    body=destination_object_resource)
resp = req.execute()
print(json.dumps(resp, indent=2))
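To get full mv semantics, delete the source object once the copy has succeeded; a sketch using the same client:

# After a successful server-side copy, remove the original to complete the
# move; no object data travels over the network in either step.
client.objects().delete(bucket=bucket_name, object=old_object).execute()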

Using bottle.py and the GAE blobstore

I recently started using bottle and the GAE blobstore, and while I can upload files to the blobstore, I cannot seem to find a way to download them from the store.
I followed the examples from the documentation but was only successful on the uploading part. I cannot integrate the example into my app since I'm using a different framework than webapp2.
How would I go about creating an upload handler and download handler so that I can get the key of the uploaded blob and store it in my data model and use it later in the download handler?
I tried using BlobInfo.all() to create a query on the blobstore, but I'm not able to get the key name field value of the entity.
This is my first interaction with the blobstore so I wouldn't mind advice on a better approach to the problem.
For serving a blob, I recommend you look at the source code of BlobstoreDownloadHandler. It should be easy to port to bottle, since there's nothing very framework-specific about it.
Here is an example of how to use BlobInfo.all():

for info in blobstore.BlobInfo.all():
    self.response.out.write(
        'Name:%s Key: %s Size:%s Creation:%s ContentType:%s<br>' %
        (info.filename, info.key(), info.size, info.creation, info.content_type))
For downloads, you only really need to generate a response that includes the header X-AppEngine-BlobKey: [your blob_key], along with anything else you need, like a Content-Disposition header if desired. Or, if it's an image, you should probably just use the high-performance image serving API: generate a URL and redirect to it, done.
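A minimal bottle sketch of that download path (the route and filename here are hypothetical):

from bottle import response, route

@route('/serve/<blob_key>')
def serve(blob_key):
    # App Engine's serving infrastructure sees this header and streams the
    # blob as the response body, so the handler itself returns nothing.
    response.set_header('X-AppEngine-BlobKey', blob_key)
    response.set_header('Content-Disposition',
                        'attachment; filename="download.bin"')
    return ''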
For uploads, besides writing a handler for App Engine to call once the upload is safely in the blobstore (that's in the docs), you need a way to find the blob info in the incoming request. I have no idea what the request looks like in bottle. BlobstoreUploadHandler has a get_uploads method, and there's really no reason it needs to be an instance method as far as I can tell. So here's a generic implementation of it that expects a webob request. For bottle you would need to write something similar that is compatible with bottle's request object.
import cgi

from google.appengine.ext import blobstore


def get_uploads(request, field_name=None):
    """Get uploads for this request.

    Args:
        field_name: Only select uploads that were sent as a specific field.

    Returns:
        A list of BlobInfo records corresponding to each upload, or an
        empty list if there are no blob-info records for field_name.

    Stolen from the SDK, since they only provide a way to get to this
    crap through their crappy webapp framework.
    """
    if not getattr(request, "__uploads", None):
        # Cache the parsed blob infos on the request the first time through.
        request.__uploads = {}
        for key, value in request.params.items():
            if isinstance(value, cgi.FieldStorage):
                if 'blob-key' in value.type_options:
                    request.__uploads.setdefault(key, []).append(
                        blobstore.parse_blob_info(value))
    if field_name:
        try:
            return list(request.__uploads[field_name])
        except KeyError:
            return []
    else:
        results = []
        for uploads in request.__uploads.itervalues():
            results += uploads
        return results
For anyone looking for this answer in the future: to do this you need bottle (d'oh!) and defnull's multipart module.
Since creating upload URLs is simple enough per the GAE docs, I'll just cover the upload handler.
from bottle import request
from google.appengine.ext.blobstore import BlobInfo
from multipart import parse_options_header


def get_blob_info(field_name):
    try:
        field = request.files[field_name]
    except KeyError:
        # Maybe the form isn't multipart, or the file wasn't uploaded.
        return None
    # App Engine rewrites the uploaded part's Content-Type header to carry
    # the blob key as an option; parse it back out.
    blob_data = parse_options_header(field.content_type)[1]
    try:
        return BlobInfo.get(blob_data['blob-key'])
    except KeyError:
        # Malformed request? Wrong field name?
        return None
Sorry if there are any errors in the code; it's off the top of my head.
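To tie it together, a hypothetical bottle handler using the helper above (the route and field name are made up):

from bottle import post

@post('/upload_handler')
def upload_handler():
    info = get_blob_info('file')  # field name used in the upload form
    if info is None:
        return 'upload failed'
    # Persist str(info.key()) in your own model, then serve it later via
    # the X-AppEngine-BlobKey header as described in the previous answer.
    return str(info.key())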

Google App Engine Example of Uploading and Serving Arbitrary Files

I'd like to use GAE to allow a few users to upload files and later retrieve them. Files will be relatively small (a few hundred KB), so just storing stuff as a blob should work. I haven't been able to find any examples of something like this. There are a few image uploading examples out there but I'd like to be able to store word documents, pdfs, tiffs, etc. Any ideas/pointers/links? Thanks!
The same logic used for image uploads applies to other file types. To make the file downloadable, you add a Content-Disposition header so the user is prompted to download it. A simple webapp example:
class DownloadHandler(webapp.RequestHandler):
    def get(self, file_id):
        # Files is a model.
        f = Files.get_by_id(file_id)
        if not f:
            return self.error(404)
        # Set headers to prompt for download.
        headers = self.response.headers
        headers['Content-Type'] = f.content_type or 'application/octet-stream'
        headers['Content-Disposition'] = 'attachment; filename="%s"' % f.filename
        # Add the file contents to the response.
        self.response.out.write(f.contents)
(untested code, but you get the idea :)
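For the upload side, a sketch under the same assumptions (the Files model and its fields are hypothetical):

class UploadHandler(webapp.RequestHandler):
    def post(self):
        # 'file' is the name of the file input in the upload form.
        upload = self.request.POST['file']  # a cgi.FieldStorage
        f = Files(filename=upload.filename,
                  content_type=upload.type,
                  contents=db.Blob(upload.value))
        f.put()
        self.redirect('/download/%d' % f.key().id())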
It sounds like you want to use the Blobstore API.
You don't mention whether you are using Python or Java, so here are links to both.
I use the Blobstore API; it admits uploads and downloads of any file up to 50 MB.
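For the Python side, the classic pattern from the Blobstore docs looks roughly like this (a sketch from memory, not tested here):

from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers

class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        # 'file' must match the field name in the form posted to the URL
        # returned by blobstore.create_upload_url('/upload').
        blob_info = self.get_uploads('file')[0]
        self.redirect('/serve/%s' % blob_info.key())

class ServeHandler(blobstore_handlers.BlobstoreDownloadHandler):
    def get(self, blob_key):
        self.send_blob(blobstore.BlobKey(blob_key))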
