I'm writing Python code to detect faces in the live feed from my MacBook camera. The code does detect the camera, but it turns off after 2 seconds and never gets any video feed. Below is my code:
import os
import cv2
from imageai.Detection import VideoObjectDetection

execution_path = os.getcwd()

detector = VideoObjectDetection()
detector.setModelTypeAsYOLOv3()
detector.setModelPath(os.path.join(execution_path, "yolo.h5"))
detector.loadModel()
camera = cv2.VideoCapture(0)  # this line turns on the camera light
video_path = detector.detectObjectsFromVideo(camera_input=camera,
    output_file_path=os.path.join(execution_path, "camera_detected_1"),
    frames_per_second=29, log_progress=True)
Let me know what I need to do differently.
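As a first step, it may help to rule out the camera itself with a plain OpenCV capture loop, independent of ImageAI. This is a minimal sketch using only standard OpenCV calls; the window name and quit key are arbitrary choices:

import cv2

camera = cv2.VideoCapture(0)
if not camera.isOpened():
    raise RuntimeError("Camera could not be opened")

while True:
    ok, frame = camera.read()  # ok is False if the feed died
    if not ok:
        print("Camera stopped delivering frames")
        break
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

camera.release()
cv2.destroyAllWindows()

If this preview also dies after a couple of seconds, the problem is with the camera or macOS permissions rather than with the detection code.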
First, I tried Huawei Face Liveness Detection. With the sample code, it works.
Next, I tried CameraView. Again, just by following the sample code, I was able to do frame processing and achieve face detection and face recognition.
<com.otaliastudios.cameraview.CameraView
app:cameraFacing="front"
android:id="#+id/cameraView"
app:cameraEngine="camera2"
app:cameraPreview="glSurface"
android:keepScreenOn="true"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:visibility="visible"
app:cameraAudio="off"
app:cameraExperimental="true">
</com.otaliastudios.cameraview.CameraView>
Question: How to integrate Huawei Face Liveness Detection into CameraView?
Given the Face Liveness Detection code below, I tried changing the view container (mPreviewContainer, as below), but it just throws an error and the app exits.
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_liveness_custom_detection);
    mPreviewContainer = findViewById(R.id.surface_layout); // <------ change this to CameraView
    mlLivenessDetectView = new MLLivenessDetectView.Builder()
            .setContext(this)
            .setFaceFrameRect(new Rect(0, 0, 0, 200))
            .setDetectCallback(new OnMLLivenessDetectCallback() {
                ...
}
I am curious how to integrate Huawei Face Liveness Detection into CameraView (or even plain Camera2 or CameraX). Can HMS take the input frames from CameraView instead of opening another camera?
P.S.:
The first error that appeared (out of the lengthy message):
I/BufferQueue: [unnamed-11129-0](this:0x70859fb800,id:0,api:0,p:-1,c:-1) BufferQueue core=(11129:com.example.cv1)
E/AndroidRuntime: FATAL EXCEPTION: CameraViewEngine
Process: com.example.cv1, PID: 11129
com.otaliastudios.cameraview.CameraException
at com.otaliastudios.cameraview.engine.Camera2Engine$2.onDisconnected(Camera2Engine.java:435)
at android.hardware.camera2.impl.CameraDeviceImpl$7.run(CameraDeviceImpl.java:252)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:226)
at android.os.HandlerThread.run(HandlerThread.java:65)
E/CameraEngine: EXCEPTION: Handler thread is gone. Replacing.
E/CameraEngine: EXCEPTION: Scheduling on the crash handler...
Update:
Please refer to ML Kit Face Verification. It recognizes and extracts key features of the face in the template, compares those features with the features of the face in the input image, and then determines whether the two faces belong to the same person based on their similarity.
To achieve liveness detection plus face detection/face recognition, two services are needed: the liveness detection service and a face detection service (actually a face comparison service, which will be supported in 2021). Currently, HMS Liveness Detection does not support taking input frames from CameraView to achieve face recognition. You may try these two services instead: Facial Recognition (LocalAuthentication Engine) or Facial Comparison (HiAI Engine).
Q: Can the HMS take the input frames from CameraView, instead of opening another camera?
No, it cannot take input frames from CameraView, because liveness detection is a multi-frame detection solution and the frame-sending logic is currently encapsulated. Your app only needs to apply for the camera permission and use the device's camera for identification or detection.
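For reference, a minimal sketch of that camera-permission request inside the activity above, using the standard AndroidX helpers (the request code is an arbitrary choice):

import android.Manifest;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

// inside the activity: ask for the CAMERA runtime permission up front
private static final int CAMERA_REQUEST_CODE = 1; // arbitrary request code

private void ensureCameraPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.CAMERA}, CAMERA_REQUEST_CODE);
    }
}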
I am trying to play an audio file from a remote URL. The code works, but when I tap the play button the app hangs for some time before the audio plays. My code is shown below. How can I optimise this for a better experience?
if(!isDraft)
{
toPlay = URL+"/getFile/"+code+"_audioFile";
}
final Media mp = MediaManager.createMedia(toPlay, false);
f.addComponent("Center", new MediaPlayer(mp));
mp.play();
f.getToolbar().setBackCommand("", ed -> {
    mp.pause();
    mp.cleanup();
    caseForm.showBack();
});
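One way to reduce the hang is to create the media asynchronously, so the form stays responsive while the remote file is fetched. This is a sketch, assuming a Codename One version recent enough to ship MediaManager.createMediaAsync; the other names are taken from the code above:

// create the media without blocking and start playback once it is ready
MediaManager.createMediaAsync(toPlay, false, null)
        .ready(media -> {
            f.addComponent("Center", new MediaPlayer(media));
            f.revalidate();
            media.play();
        });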
I'm new to Codename One and I can't understand how to take a picture from the camera using captureImage() from the CameraKit library.
I know it's possible with the Capture API (Capture.capturePhoto()), but that API opens a separate camera application to take the photo, and I want to do it directly.
I created a button:
FloatingActionButton capture_button =
FloatingActionButton.createFAB(FontImage.MATERIAL_CAMERA);
capture_button.bindFabToContainer(hi, CENTER, BOTTOM);
capture_button.addActionListener(e -> {
ck.captureImage();
.............
After that, I tried to get my picture from the onImage callback, but it does not work.
@Override
public void onImage(CameraEvent ev) {
    try {
        byte[] jpegData = ev.getJpeg();
        // wrap the JPEG bytes so they can be copied into storage
        InputStream stream = new ByteArrayInputStream(jpegData);
        OutputStream out = Storage.getInstance().createOutputStream("MyImage.jpg");
        Util.copy(stream, out);
        Util.cleanup(stream);
        Util.cleanup(out);
        StorageImage img = StorageImage.create("MyImage.jpg", jpegData, -1, -1);
        ............................
}
The byte array is empty. Please help.
CameraKit support broke a bit after its release due to changes in CameraKit, which is still not at the 1.0 level; this is tracked in this issue. CameraKit was supposed to reach 1.0 status months ago but still hasn't reached that point. We are waiting for it to reach 1.0 so we can make fixes against a stable version.
We also need a bit of time/resources to do that work, which is something we are sorely lacking.
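In the meantime, the stock Capture API is a workable fallback, even though it goes through the platform's camera app. A minimal sketch using only standard Codename One calls:

// fall back to the built-in Capture API until CameraKit stabilizes
String path = Capture.capturePhoto();
if (path != null) { // null means the user cancelled
    try {
        Image img = Image.createImage(path);
        // ... use the image
    } catch (IOException err) {
        Log.e(err);
    }
}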
I'm pretty new to OpenCV and I'm trying to get my bearings by looking at, and running, sample code.
One of the sample programs that I was looking at is a program for displaying webcam video. Here are the important lines (the program doesn't execute farther than this):
// Make frame.
CvCapture* capture = cvCaptureFromCAM(0);
if(!capture) {
printf("Webcam not initialized....");
}
// Display video in frame.
Unfortunately, the if statement always executes. For some reason, capture is not initialized.
Even stranger, when I run the program, it gives me a GUI to select the webcam that I want to use.
However, even after I select the webcam, capture is not initialized!
What does this mean? How do I fix this?
Thanks for any suggestions.
It is possible that OpenCV cannot access the webcam until after you select it. In that case, try looping until the webcam is available:
CvCapture *capture = NULL;
do {
// you could also try passing in CV_CAP_ANY or -1 instead of 0
capture = cvCaptureFromCAM(0);
} while (!capture);
If this still doesn't work, call cvErrorStr(cvGetErrStatus()) to get a string explaining the error.
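For instance, a sketch that bounds the retry and then prints the error (the attempt limit is an arbitrary choice):

// give up after a few attempts and print the last recorded OpenCV error
CvCapture *capture = NULL;
for (int attempt = 0; attempt < 10 && !capture; attempt++) {
    capture = cvCaptureFromCAM(0);
}
if (!capture) {
    printf("OpenCV error: %s\n", cvErrorStr(cvGetErrStatus()));
}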
This is going to be one of those awkward questions looking for an answer that probably doesn't exist, but here goes.
I've been developing some simple games using Corona, and while the functionality seems to work pretty well across most of the physical devices I have tested on, the one main issue is the layout. I know you can't really build for every single device perfectly, but I'm wondering if there is a common method to make an app look good across as many screens as possible. I have access to these devices:
iPad 1 & 2: 4:3 (1.33)
iPhone 960 × 640 3:2 (1.5)
iPhone 480x320 3:2 (1.5)
Galaxy Nexus 16:9 (1.77)
From what I have seen, people aim to use 320x480 as a scaled resolution and then let Corona upscale to the correct device resolution (with any @2x images as required), but this leads to letterboxing or cropping depending on the config.lua scale setting. While it does scale correctly, having a letterbox isn't great.
So would I be best not to specify a width & height in the config file, but instead to check the screen at startup for the 1.33 / 1.5 / 1.77 aspect ratios? Surely, given the whole point of Corona SDK, there is some sort of 'typical' setup that developers use at the start of any new project?
Thank you
It seems that I have found a pretty good solution based on this forum post on the Ansca website: http://developer.anscamobile.com/forum/2012/03/12/understanding-letterbox-scalling
In summary, the config.lua should look like this:
application = {
content = {
width = 320,
height = 480,
scale = "letterbox",
xAlign = "center",
yAlign = "center",
imageSuffix = {
["#2x"] = 2,
},
}
}
Create background images at 360x570 for older devices; 320x480 screens will crop the image slightly, and it will scale nicely on older Android devices.
Create background images at 720x1140 for iPad and iPhone retina; again, these will scale on Android and be slightly cropped on iOS.
As an example, where you would normally create a 320x480 image and display it with:
local bg = display.newImageRect("bg.png",320x480)
bg.x = display.contentWidth/2
bg.y = display.contentHeight/2
.. instead create a 360x570 background and use the following code:
local bg = display.newImageRect("bg.png",360x570)
bg.x = display.contentWidth/2
bg.y = display.contentHeight/2
This is just a summary, so check the link for more detailed instructions.
Well, you CAN use numbers slightly off from 2 for the scaling if you want correctly sized images for the different devices. For example:
application =
{
    content =
    {
        width = 640,
        height = 960,
        scale = "zoomEven",
        imageSuffix =
        {
            ["-iphone3"] = 0.5,
            ["-ipad2"] = 1.066,
            ["-ipad3"] = 2.133,
        },
    }
}
In which "background.png" would be a 640x960 image for the iphone4, while "background-iphone3.png" would be 320x480 (you don´t need this, but it will reduce memory requirement for iphone3 applications). "background-ipad3.png" would need to be 1536x2048 (and half that for -ipad2).
Of course it doesn´t solve the aspect ratio for screen positioning, but it solves it for all other gfx related problems. Remember to use display.newImageRect, not display.newImage or you won´t see any difference.
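For example, a minimal sketch of how the suffixes resolve with the config above (the file name is an assumption):

-- Corona picks background.png, background-iphone3.png or background-ipad3.png
-- automatically based on the device's content scale
local bg = display.newImageRect("background.png", 640, 960)
bg.x = display.contentCenterX
bg.y = display.contentCenterY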