I have successfully implemented video sharing in my app using React Native and Firebase, but I want to ensure the videos being stored are no more than 1080x1080 (maybe 720, depending on how it looks).
Videos are at most 8 seconds long, and I am trying to keep them under 5 MB each if possible. I was able to do some compression on the client side (crop to square/trim), but I am hoping to compress the videos even more without losing quality via Cloud Functions (a storage trigger).
After looking around, MoviePy seems like a good option, but it uses Python and I am not sure how I can run this script inside a Cloud Function storage trigger.
Here is what that looks like:
# Not sure how this will import
import moviepy.editor as mp

# Can I get the video here from the bucket path in a cloud function?
clip = mp.VideoFileClip("video-stored.mp4")

# Make the height 1080px (according to the MoviePy documentation, the width is
# then computed so that the width/height ratio is conserved).
clip_resized = clip.resize(height=1080)

# Resize the video, then store it back in the same location (same file path).
clip_resized.write_videofile("video-stored-resized.mp4")
I would love to hear some suggestions regarding video compressing via a cloud function and thoughts on using the above script/module with cloud functions.
At this point, Firebase Functions does not support languages other than Node.js.
Thus, there are 2 solutions:
1. If you would like to keep using MoviePy: write the Node.js part that fires on the Firebase Storage trigger and have it call your MoviePy-related API, which you host in any environment you prefer (I guess you should use Pub/Sub, provided by GCP, to call the Python APIs); a sketch of such a worker is below.
2. Write all parts in Node.js. There is a great module called fluent-ffmpeg for Node.js too, but I know that adding a watermark with that module is not as easy as it is with MoviePy...
By the way, when I tried to combine videos in Firebase Functions, I was not able to, maybe because of the limitations of the environment.
So, I personally recommend the very first solution; of course it depends on your situation, such as how big your video files are.
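If you go with the first solution, the Python side could be a small standalone worker that listens on a Pub/Sub subscription, pulls the video out of the Storage bucket, resizes it with MoviePy, and writes it back to the same path. The sketch below is only an illustration of that idea: the project, subscription and bucket names, and the message format the Node.js trigger publishes, are all assumptions.

import json
import tempfile

from google.cloud import pubsub_v1, storage
import moviepy.editor as mp

PROJECT = "my-project"             # assumed project id
SUBSCRIPTION = "video-resize-sub"  # assumed subscription fed by the Node.js trigger
BUCKET = "my-app.appspot.com"      # assumed storage bucket

storage_client = storage.Client()

def handle_message(message):
    # assumed message format: {"name": "<path of the uploaded video in the bucket>"}
    data = json.loads(message.data.decode("utf-8"))
    file_path = data["name"]

    bucket = storage_client.bucket(BUCKET)
    with tempfile.NamedTemporaryFile(suffix=".mp4") as src, \
         tempfile.NamedTemporaryFile(suffix=".mp4") as dst:
        # download, resize to 1080px height (width keeps the aspect ratio), re-upload
        bucket.blob(file_path).download_to_filename(src.name)
        clip = mp.VideoFileClip(src.name)
        clip.resize(height=1080).write_videofile(dst.name)
        bucket.blob(file_path).upload_from_filename(dst.name)
    message.ack()

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)
subscriber.subscribe(sub_path, callback=handle_message).result()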
In the past, I built a custom (React-based) component for the Streamlit framework which lets the user record audio inside the web browser of their choice. Please have a look at the current version of streamlit-audio-recorder here.
As a mediocre web developer, I did not succeed in converting the audio data stored in the browser's cache (audio-blob object) so that I could return it to the Streamlit backend.
What I have tried so far & my thought process:
There are various scripts that enable saving audio to a local disk. However, none of these solutions work in an online-deployed scenario (the program would save to the server's disk instead of the user's). This is why I came to the conclusion that this issue requires a solution that uses the audio data stored in the user's browser cache after recording.
The data stored in this cache in the audio-blob format cannot be passed directly back to Python as a return variable and needs to be converted to an "environment-agnostic datatype" (I tried binary base64). This conversion's complexity scales exponentially with the length of the audio data. Therefore I considered splitting the audio-blob into slices which could then be converted, aggregated and returned to Python. However, this process of splitting and concatenating WAV-audio blobs was not something I could implement, due to the data structure/metadata inside the WAV file and the lack of libraries that would enable audio-blob slicing, etc.
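For what it is worth, if the React side did manage to hand back a plain base64 string, the Python/Streamlit side of that approach would be short. This is only a sketch of the idea, assuming a hypothetical component that returns base64-encoded WAV data; the component name, path and return format below are made up.

import base64

import streamlit as st
import streamlit.components.v1 as components

# hypothetical component that returns the recording as a base64-encoded WAV string
audio_recorder = components.declare_component("audio_recorder", path="./frontend/build")

b64_audio = audio_recorder()
if b64_audio:
    wav_bytes = base64.b64decode(b64_audio)   # back to raw WAV bytes on the Python side
    st.audio(wav_bytes, format="audio/wav")   # play it back in the app
    with open("recording.wav", "wb") as f:    # or persist it server-side
        f.write(wav_bytes)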
Does somebody know of a more elegant and performant solution? This would make it possible to finalize the audio recorder component and provide immense value to the Streamlit community, which currently lacks comparable functionality.
I've read a few threads where this is discussed. Shai's response has always been that files can only be read, but not written to shared locations.
Perhaps saving other types of files isn't so simple, but shouldn't there be an option for saving pictures in CN1?
I haven't seen the WhatsApp Clone code, but if it truly is a clone, shouldn't it have the option to share pictures (and possibly files)? Or is it a simple text chat that shares pictures that can never be saved outside the app?
I also read somewhere (6 months ago) that Shai said this should be a feature of CameraKit. Does that mean this feature is in development? If so, that would be great, but having an ETA would also help us align with our own devs.
If it isn't being developed, can I at least know whether this is something I can develop natively within CN1?
We expose the full file system, so you can write to any place the native app can write to. Native apps don't have write access to the gallery directory and need to explicitly request it to put a file there. This could be accomplished easily in any external cn1lib (e.g. we could do it in CameraKit), but we haven't done it for CameraKit or the WhatsApp clone.
AFAIK there's no RFE open on this feature, so I can't even tell you whether it's assigned to a specific milestone.
I am planning to develop a hybrid mobile app using Ionic. One of the features I need is an offline Google map. Is there a way to do it?
Whether this is possible depends on the requirements of your application. Are your users on "modern" devices, i.e. is HTML5 fully supported? Do your users need to view/edit the map globally, or just in a specific area? Does the map really need to be provided by Google? I'll address some issues below to point you toward possible approaches to this problem.
Do you really need Google Maps? (best-case scenario)
First off, do you really need Google Maps? Also relevant: how far do your users need to zoom their maps? If it can be any map, and zooming is not a high priority (if it is, including all map tiles will make the app eat all available storage), you could probably package map tiles as part of your app and display them with a library like http://leafletjs.com/. The library is well documented, provides a map interface for a variety of map providers, and can be configured to use your own local map tiles. You could include map tiles for multiple zoom levels if necessary, and limit the min/max zoom levels to the tiles you actually have available. This will make your maps work offline.
I can't or don't want to provide my own tiles: make sure you have really looked into the option above; there are systems out there that provide map tiles you could use (check https://www.mapbox.com/ for example).
Okay, so you really don't want to do what I suggested. What are the options now? JavaScript mapping solutions typically render tiles based on the location of the map you want to see and the zoom level. These tiles are requested from the tile provider. I do not know how to implement this for Google exactly, so you might need to do some research here - I'll try to point you in a direction. There will be requests to get the tiles from the servers. I checked on http://maps.google.com which images are loaded when navigating the map: (example (click)). Find out which URLs are used in your situation; we will need these kinds of URLs later (just inspect the network tab in your browser console and see which requests are made when scrolling the map). If we only need our users to work in a certain area when offline, we could use service workers to cache the responses of these requests while online, and serve those caches when offline. Read more on service workers here (click).
Advantage: real offline map functionality for any tile you visited before (as long as your cache hasn't overflowed, depending on your implementation of the service workers, and only on service-worker-supported browsers/devices).
Disadvantages: no support for tiles that were never put in the cache (i.e. never seen before). Also, this will only work on devices that support service workers, so it might only be an option where you either don't care about users on "older" devices, or where you can control the users' device choices. Note that using Crosswalk could ease your development effort here, since you then only have to consider one browser runtime; but Crosswalk also doesn't support older devices.
However: this solution could be fine for people who need to work in a specific area, which might be true for the case raised by #vipul-r. If you or your users know in advance where the maps need to work, you can instruct/help them in loading and caching their maps correctly.
If you can't work with either of these 2 solutions, then I highly doubt there is a way to do it. I don't see any other way to the best of my knowledge.
I have successfully ported some Python code to App Engine that uses PIL's ImageFont and ImageDraw to generate a dynamic image. The only remaining problem is that the original code loads a TrueType font using a call like this:
titlefont = ImageFont.truetype("Verdana Bold.ttf", titlefontsize)
I can't just upload the font file and access it directly in GAE (at least I don't think I can?!). I guess it might be possible somehow to dump font data in a datastore blob, load that and feed it into PIL, but this seems less than elegant, and quite wasteful if everybody who uses PIL for image generation does the same thing. Currently I'm stuck with ImageFont.load_default() though, which gives pretty horrendous looking results.
Is there some clever way of working with alternative fonts in GAE PIL? Some additional API I'm missing that will return usable font objects?
Any file in your application's directory will be uploaded along with your application when you deploy it.
So yes, you should be able to "just" access any file you need by keeping it in or under your application directory, moving it there if necessary.
If you want to serve those files, that's something different. https://developers.google.com/appengine/docs/python/gettingstarted/staticfiles
But try including your .ttf file where your app can locate it and it should just work.
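For example, assuming "Verdana Bold.ttf" is deployed next to the module that renders the image (the filename and font size below are just placeholders), something like this should work:

import os

from PIL import ImageFont

# resolve the font relative to this module, so it works regardless of the
# working directory App Engine happens to use
FONT_PATH = os.path.join(os.path.dirname(__file__), "Verdana Bold.ttf")
titlefontsize = 36  # example size

titlefont = ImageFont.truetype(FONT_PATH, titlefontsize)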
I'm planning to launch a comic site which serves comic strips (images).
I have little prior experience with serving/caching images.
So these are the 2 methods I'm considering:
1. Using LinkProperty
class Comic(db.Model):
    image_link = db.LinkProperty()
    timestamp = db.DateTimeProperty(auto_now=True)
Advantages:
The images are fetched straight from disk space itself (and disk space is cheap, I take it?)
I can easily set up app.yaml with an expiration date to cache the content in the user's browser
I can set up memcache to retrieve the entities faster (for high traffic)
2. Using BlobProperty
I used this tutorial, and it worked pretty neatly: http://code.google.com/appengine/articles/images.html
Side question: Can I say that using BlobProperty sort of "protects" my images from outside linkage? That is, people can't just link directly to the comic strips.
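To make method 2 concrete, here is a rough sketch of what the model and the serving handler could look like, loosely along the lines of that article; the class, field and handler names are mine, not the article's, and the handler is the place where any checks against outside linkage would go.

from google.appengine.ext import db, webapp

class ComicStrip(db.Model):
    image_data = db.BlobProperty()
    timestamp = db.DateTimeProperty(auto_now=True)

class StripHandler(webapp.RequestHandler):
    def get(self):
        # the entity key is passed as a query parameter, e.g. /strip?key=...
        strip = ComicStrip.get(self.request.get("key"))
        if strip and strip.image_data:
            # any hotlink/referrer checks would go here
            self.response.headers["Content-Type"] = "image/png"
            self.response.out.write(strip.image_data)
        else:
            self.error(404)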
I have a few worries for method 2.
I can obviously memcache these entities for faster reads.
But then:
Is memcaching images a good idea? My images are large (100-200 KB per image). I think memcache allows only up to 4 GB of cached data? Or is it 1 MB per memcached entity, with unlimited entities...
What if appengine's memcache fails? -> Solution: I'd have to go back to the datastore.
How do I cache these images in the user's browser? If I were using method no. 1, I could easily add an expiration date for the content to my app.yaml, and the pictures would get cached on the user's side.
I would like to hear your thoughts.
Should I use method 1 or 2? Method 1 sounds dead simple and straightforward; should I be wary of it?
[EDITED]
How do I solve this dilemma?
Dilemma: the last thing I want is for people to grab the direct link to an image and put it up on bit.ly, because then users get directed only to the image on my server (and not to the advertising/content around it that they would have seen had they accessed it from the main page itself).
You're going to use a lot of bandwidth transferring all these images from the server to the clients (browsers). Remember that App Engine has a maximum number of files you can upload; I think it is 1000, but it may have increased recently. And if you want to control access to the files, I do not think you can use option #1.
Option #2 is good, but your bandwidth and storage costs are going to be high if you have a lot of content. To solve this problem, people usually turn to Content Delivery Networks (CDNs). Amazon S3 and edgecast.com are two such CDNs that support token-based access URLs. Meaning, you can generate a token in your App Engine app that is good for an IP address, a time window, a geography and some other criteria, and then hand out your CDN URL with this token to the requester. The CDN serves your images and does the access checks based on the token. This will help you control access, but remember: if there is a will, there is a way, and you can't secure anything 100% - though you can probably get reasonably close.
So instead of storing the content in App Engine, you would store it on the CDN and use App Engine to create URLs with tokens pointing to the content on the CDN.
Here are some links about the signed URLs. I've used both of these:
http://jets3t.s3.amazonaws.com/toolkit/code-samples.html#signed-urls
http://www.edgecast.com/edgecast_difference.htm - look at 'Content Security'
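For reference, here is roughly what generating such a time-limited URL looks like in Python with boto against S3 (the bucket and key names are placeholders); the jets3t link above shows the same idea for Java.

import boto

conn = boto.connect_s3()  # credentials are picked up from the environment

signed_url = conn.generate_url(
    expires_in=300,               # the link stays valid for 5 minutes
    method="GET",
    bucket="my-comic-bucket",     # placeholder bucket
    key="strips/strip-001.png",   # placeholder object key
)
# embed signed_url in the page instead of a permanent image URL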
In terms of solving your dilemma, I think that there are a couple of alternatives:
- You could cause the images to be rendered in a Flash object that would download the images from your server in some kind of encrypted format that it would know how to decode. This would involve quite a bit of up-front work.
- You could have a valid-one-time link for the image. Each time that you generated the surrounding web page, the link to the image would be generated randomly, and the image-serving code would invalidate that link after allowing it one use. If you have a high-traffic web site, this would be a very resource-intensive scheme.
Really, though, you want to consider just how much work it is worth to force people to see ads, especially when a goodly number of them will be coming to your site via Firefox, and there's almost nothing that you can do to circumvent AdBlock.
In terms of choosing between your two methods, there are a couple of things to think about. With option one, where you are storing the images as static files, you will only be able to add new images by doing an appcfg.py update. Since App Engine applications do not allow you to write to the filesystem, you will need to add new images to your development code and do a code deployment. This might be difficult from a site-management perspective. Also, serving the images from memcache would likely not offer a performance improvement over having them served as static files.
Your second option, putting the images in the datastore, protects your images from hotlinking only to the extent that you can control, in your serving logic, whether they are served or not. The problem you will encounter is that making that decision is difficult. Remember that HTTP is stateless, so finding a way to distinguish a request made from a link external to your application from one internal to it is going to require trickery.
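The usual trickery is to look at the Referer header and only serve the image when the request appears to come from one of your own pages; it is easily spoofed and sometimes stripped by browsers, but it does stop casual hotlinking. A sketch, with a made-up host name:

def is_internal_request(request):
    # serve the image only if the Referer points at our own site; the host
    # value is a placeholder, and real checks should also handle HTTPS and
    # missing Referer headers
    referer = request.headers.get("Referer", "")
    return referer.startswith("http://www.my-comic-site.example/")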
My personal feeling is that jumping through hoops to make sure that people can't see your comics without seeing ads is solving the problem the wrong way. If the content that you are publishing is worth protecting, people will flock to your website to enjoy it anyway. Through high volumes of traffic, you will more than make up for anyone who links directly to your image and thereby circumvents a few ad serves. Don't try to outsmart your consumers. Deliver outstanding content, and you will make plenty of money.
Your method #1 isn't practical: You'd need to upload a new version of your app for each new comic strip.
Your method #2 should work fine. It doesn't automatically "protect" your images from being hotlinked - they're still served up on a URL like any other image - but you can write whatever code you want in the image serving handler to try and prevent abuse.
A third option, and a variant of #2, is to use the new Blob API. Instead of storing the image itself in the datastore, you can store the blob key, and your image handler just instructs the blobstore infrastructure what image to serve.
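As a sketch of that third option (entity and handler names are illustrative): store only the blob key on the entity and let a BlobstoreDownloadHandler stream the bytes, so the image never has to pass through your own handler code.

from google.appengine.ext import blobstore, db
from google.appengine.ext.webapp import blobstore_handlers

class ComicStrip(db.Model):
    blob_key = blobstore.BlobReferenceProperty()
    timestamp = db.DateTimeProperty(auto_now=True)

class ServeStripHandler(blobstore_handlers.BlobstoreDownloadHandler):
    def get(self, key):
        blob_info = blobstore.BlobInfo.get(key)
        if blob_info:
            self.send_blob(blob_info)  # the blobstore infrastructure serves the image
        else:
            self.error(404)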