I'm using Google Cloud Platform (App Engine). I have a High Performance Image Store (HPIS) URL for an animated GIF (example below). I know there are URL arguments you can append to manipulate the image, e.g. =s1600 or =s128-c.
I was wondering if there's an argument to prevent a GIF from animating, perhaps by showing just the first frame. Also, what other arguments are out there?
https://lh3.googleusercontent.com/fh7nuo67JhRn84I7hQ5hWjsi9e9WaH8Lq3JNUCAWsu5_kcp0HozkGKQO2c3KV_1CN_5cmgs3P0oNY3--Ejp8T9goDMy3Y75cig
Over the years I've been able to uncover the following "extra features" that HPIS has to offer. The option you want is -k, which stops the animation:
-bXX -- border pixel size (border color depends on image color?)
-c -- crop center
-hXXXX -- height
-h -- (without a value) puts white padding around the border
-d -- download
-g -- Google+ panorama XML?
-k -- stop animation
-n -- crop from... somewhere between top and center? (requires -hXXXX -wYYYY)
-p -- crop from top?
-rXXX -- rotate in degrees (90/180/270)
-sXXXX -- size to best fit
-wYYYY -- width
-v[0|1|2|3] -- quality level/file size (highest to lowest)
Other combos: -hXXXX-wYYYY-s -- stretch to fit (only some images will stretch?)
I'd love to hear from anyone else who has uncovered more of these gems.
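For scripted use, the options above can be chained after a single = separator, joined with -. A minimal sketch (the helper name and the placeholder image ID are my own; the option strings are the undocumented ones listed above):

```python
# Hypothetical helper: chain HPIS options onto a base
# lh3.googleusercontent.com URL. Options follow one "=" and are
# joined with "-", producing e.g. "...=s128-c-k".
def with_options(base_url, *options):
    return base_url + "=" + "-".join(options)

# 128px square center crop with the animation stopped
url = with_options("https://lh3.googleusercontent.com/<image-id>", "s128", "c", "k")
```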
Related
When I use the Google Sheets IMAGE function, the image's resolution comes out lower. So I tested three cases:
case 1: IMAGE function, hosting site 1 (not Google)
case 2: IMAGE function, hosting site 2 (not Google)
case 3: Google Sheets image insert (Insert -> Image)
Only in case 3 is there no low-resolution problem. What can I do?
Here is the Google Sheets URL I tested:
https://docs.google.com/spreadsheets/d/1IB9yMDXFrSZDUPbIGFy92BUWszd1Qzx2kdJEuLMpBe8/edit?usp=sharing
The IMAGE function's second argument selects a sizing mode:
mode – [ OPTIONAL – 1 by default ] – the sizing mode for the image
1 resizes the image to fit inside the cell, maintaining aspect ratio.
2 stretches or compresses the image to fit inside the cell, ignoring aspect ratio.
3 leaves the image at original size, which may cause cropping.
4 allows the specification of a custom size.
In your case, you should be using:
=IMAGE(B2, 3)
But keep in mind that the original size is 1266×420, so to get the original it would be (mode 4 takes height, then width):
=IMAGE(B2, 4, 420, 1266)
Your C2 cell dimensions are 572×200, so for such a small cell it's best to use a smaller source image. You can use an aspect-ratio calculator like https://andrew.hedges.name/experiments/aspect_ratio/
and some online resizing tool like https://picresize.com/
https://i.imgur.com/ptwnpnG.png
=IMAGE(B2, 4, 189, 570)
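The 570×189 figure above is just the 1266×420 original scaled down to fit the 572×200 cell while keeping the aspect ratio. A small sketch of that calculation (the function name is my own; the answer's 570×189 rounds down slightly further than this):

```python
def fit_within(img_w, img_h, cell_w, cell_h):
    """Largest size fitting inside the cell at the image's aspect ratio."""
    scale = min(cell_w / img_w, cell_h / img_h)
    return round(img_w * scale), round(img_h * scale)

# 1266x420 image in a 572x200 cell -> roughly 572x190
w, h = fit_within(1266, 420, 572, 200)
```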
I have followed multiple examples for training a custom object detector in TensorFlow.js. The main problem I am facing is that every one of them uses a pretrained model.
Pretrained models are fine for general use cases, but they fail in custom scenarios. For example, take this example from the official TensorFlow.js examples: it uses MobileNet, and MobileNet has a 224x224 image-size restriction, which defeats the whole purpose, because my images are big and not all the same ratio, so resizing is not an option.
I have tried multiple examples; all follow the same path one way or another.
What I want?
Any example by which I can train a custom object detector from scratch in TensorFlow.js.
Although the answer sounds simple, trust me, I have been searching for this for days. Any help will be greatly appreciated. Thanks
Currently it is not yet possible to use the TensorFlow object detection API in Node.js. But the image size should not be a restriction: instead of resizing, you can crop your image and keep only the part that contains the object to be detected.
One approach would be to partition the image into 224x224 tiles and run detection on each partition, but what if the object spans two partitions?
The image does not need to be partitioned for this. When labelling the image, you will know the x, y coordinates (from the top left) and the w, h of the detected box. You only need to crop a part of the image that contains the box. Cropping at the coordinates x - (224-w)/2, y - (224-h)/2 can be a good start. There are two issues with these coordinates:
the detected boxes will always be in the center, so the training will be biased. To prevent this, a random factor can be used: x - (224-w)/r, y - (224-h)/r, where r can be taken randomly from [1-10], for instance
if a detected box is bigger than 224x224, you might first resize the image (keeping its ratio) before cropping. In that case the box size (w, h) will need to be readjusted according to the scale used for the resizing
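A minimal NumPy sketch of that cropping scheme (the function and argument names are my own; r = 2 reproduces the centred crop, larger r shifts the box off-centre):

```python
import numpy as np

def crop_around_box(image, x, y, w, h, size=224, r=2):
    """Crop a size x size window containing a labelled box.

    (x, y) is the box's top-left corner in image coordinates; the crop
    origin is x - (size-w)/r, y - (size-h)/r, clamped to the image.
    Returns the crop and the box coordinates relative to the crop.
    """
    cx = int(x - (size - w) / r)
    cy = int(y - (size - h) / r)
    cx = max(0, min(cx, image.shape[1] - size))
    cy = max(0, min(cy, image.shape[0] - size))
    crop = image[cy:cy + size, cx:cx + size]
    return crop, (x - cx, y - cy, w, h)
```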
How do I set all display edges in a model to hard (in Maya 2017)?
I found a MEL script in a different post that lets you select all hard edges
(this one: polySelectConstraint -m 3 -t 0x8000 -sm 1;),
but I want to turn all display edges hard, not just select them. I want to do this because I didn't build my model out of primitives and it's overly complex (it's imported from SketchUp). The only way I can think of is to individually select each and every mesh component and set it manually, but I feel there must be a MEL script that can do the trick. Does anyone know of one?
The relevant command is polySoftEdge. If you set the angle to something lower than the angle between the faces, the edge will be hard; if you set it to something higher, it will be soft.
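For example, a MEL snippet along these lines should harden everything in one go (untested sketch; -a 0 hardens every edge, while a large angle like -a 180 would soften them all):

```
// select every polygon mesh in the scene, then set the
// smoothing angle to 0 so all edges display as hard
select -r `ls -type mesh`;
polySoftEdge -a 0;
```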
I have an mp4 video which has around 50000 frames of size 1920x720. I have to remove a specific area from the video (from all frames). Can you suggest a method in MATLAB?
Specify an ROI (region of interest) for each individual frame of the video, where the ROI is the specific area you want to remove. Quite simple. I hope my advice helped; if you are still not sure, comment on this answer and I will add more hints.
If you read the video file frame-by-frame, then it is a simple matter of matrix indexing to remove a specific area (make it black, as you said):
vid = VideoReader('input.mp4');   % hypothetical filename
while hasFrame(vid)
    % read the next frame
    frame = readFrame(vid);
    % black out the region (rows 100:200, columns 300:350)
    frame(100:200, 300:350) = 0;
end
If the frames are RGB, just adjust the indexing appropriately: frame(a:b,c:d,:)=0
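If MATLAB is not a hard requirement, the same masking is one line of array indexing in Python. Here a synthetic frame stands in for what a video reader (e.g. cv2.VideoCapture) would return; the region numbers match the example above:

```python
import numpy as np

# stand-in for a decoded 1920x720 RGB frame; with a real file you
# would read frames via cv2.VideoCapture(...).read() in a loop
frame = np.full((720, 1920, 3), 255, dtype=np.uint8)

# black out rows 100-199 and columns 300-349 across all channels
frame[100:200, 300:350, :] = 0
```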
I am now working on an eye-tracking project, in which I am tracking eyes in a webcam video (resolution is 640x480).
I can locate and track the eye in every frame, but I need to locate the pupil. I read a lot of papers and most of them refer to Alan Yuille's deformable template method to extract and track the eye features. Can anyone help me with the code of this method in any languages (matlab/OpenCV)?
I have tried different thresholds, but due to the low resolution of the eye regions, it does not work very well. I would really appreciate any kind of help with finding the pupil, or even the iris, in the video.
What you need to do is convert your webcam to a near-infrared (NIR) cam. There are plenty of tutorials online for that. Try this.
An image taken from an NIR cam will look something like this -
You can then use OpenCV to threshold the image.
Then use the erode function.
After this, fill the image with some color, taking a corner as the seed point.
Eliminate the holes and invert the image.
Use the distance transform (distance to the nearest zero pixel).
Find the max value's coordinate and draw a circle there.
If you're still working on this, check out my OptimEyes project: https://github.com/LukeAllen/optimeyes
It uses Python with OpenCV, and works fairly well with images from a 640x480 webcam. You can check out the "Theory Paper" and demo video on that page also. (It was a class project at Stanford earlier this year; it's not very polished but we made some attempts to comment the code.)
Depending on the application, for tracking the pupil I would find a bounding box for each eye and then find the darkest pixel within that box.
Some pseudocode:
box left_location = findlefteye()
box right_location = findrighteye()
image_matrix left = image[left_location]
image_matrix right = image[right_location]
pixel left_min = argmin(left)    // coordinates of the darkest pixel within each box
pixel right_min = argmin(right)
pixel left_pupil = left_location.corner + left_min
pixel right_pupil = right_location.corner + right_min
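A compact NumPy version of that idea for one eye (the box format (x, y, w, h) and the function name are my own assumptions; the box itself would come from your eye detector):

```python
import numpy as np

def darkest_pixel(gray, box):
    """Return image coordinates (x, y) of the darkest pixel inside box.

    gray is a 2-D grayscale array; box = (x, y, w, h) with (x, y)
    the top-left corner of a hypothetical eye bounding box.
    """
    x, y, w, h = box
    roi = gray[y:y + h, x:x + w]
    r, c = np.unravel_index(np.argmin(roi), roi.shape)
    return (x + c, y + r)
```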
Following the first answer, suggested by Anirudth:
Just apply the HoughCircles function after the thresholding step (step 2). Then you can directly draw circles around the pupil, and using the radius (r) and center (x, y) you can easily find the center of the eye.