I have created a sample based on the grafika code where I create a video from camera feeds. In the grafika sample there is a drawExtra() method, called while passing data to the muxer, which draws moving dots in the video. I want to draw a watermark at the top-left corner instead.
Please suggest.
if (mFileSaveInProgress && showCam)
{
    Log.e(TAG, "drawFrame saving to new video");
    mEncoderSurface.makeCurrent();
    GLES20.glViewport(0, 0, 720, 1280);
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    drawExtra(mFrameNum, viewWidth, viewHeight);
    mCircEncoder.frameAvailableSoon();
    mEncoderSurface.setPresentationTime(mvideoTexture.getTimestamp());
    mEncoderSurface.swapBuffers();
}
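One possible approach (a rough sketch, not taken from the grafika sources): upload the watermark bitmap once into a regular GL_TEXTURE_2D texture, then, after blitting the camera frame, restrict the viewport to a small rectangle at the top-left of the encoder surface and draw that texture with a second FullFrameRect created for 2D textures. The names mWatermarkBlit and mWatermarkTexId, and the watermark size, are placeholders you would define yourself.

// Hypothetical sketch: draws a watermark where the sample currently calls drawExtra().
// Assumes mWatermarkBlit = new FullFrameRect(new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_2D))
// and mWatermarkTexId holds a texture uploaded once from a Bitmap via GLUtils.texImage2D().
private void drawWatermark(int videoWidth, int videoHeight) {
    int wmWidth = 160;      // watermark size in pixels (placeholder values)
    int wmHeight = 80;

    GLES20.glEnable(GLES20.GL_BLEND);
    GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);

    // GL's origin is bottom-left, so the top-left corner starts at y = videoHeight - wmHeight
    GLES20.glViewport(0, videoHeight - wmHeight, wmWidth, wmHeight);

    float[] identity = new float[16];
    android.opengl.Matrix.setIdentityM(identity, 0);
    mWatermarkBlit.drawFrame(mWatermarkTexId, identity);

    GLES20.glDisable(GLES20.GL_BLEND);
    // restore the full-frame viewport for the next frame
    GLES20.glViewport(0, 0, videoWidth, videoHeight);
}

Calling drawWatermark(720, 1280) in place of drawExtra() above should stamp the watermark on every encoded frame; depending on how the bitmap is uploaded you may need to flip the texture matrix vertically.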
I have an SCNPlane that I created in the SceneKit editor, and I want one side of the plane to have a certain image and the other side to have another image. How do I do that in the SceneKit editor?
So far I've tried adding two materials to the plane and unchecking double-sided, but that doesn't work.
Any help would be appreciated!
Per the SCNPlane docs:
The surface is one-sided. Its surface normal vectors point in the positive z-axis direction of its local coordinate space, so it is only visible from that direction by default. To render both sides of a plane, either set the isDoubleSided property of its material to true or create two plane geometries and orient them back to back.
That implies a plane has only one material — isDoubleSided is a property of a material, letting that one material render on both sides of a surface, but there's nothing you can do to one material to turn it into two.
If you want a flat surface with two materials, you can arrange two planes back to back as the doc suggests. Make them both children of a containing node and you can then use that to move them together. Or you could perhaps make an SCNBox that's very thin in one dimension.
Very easy to do in 2022.
It's very easy and common to do this: you just add the rear as a child.
To be clear, the node (and the rear you add) should both use a single-sided material.
Obviously, the rear you add points in the other direction!
Do note that they are indeed in "exactly the same place". Sometimes folks new to 3D meshes think the two planes need to be "a little apart"; not so.
public var rear = SCNNode()
private var theRearPlane = SCNPlane()

private func addRear() {
    addChildNode(rear)
    rear.eulerAngles = SCNVector3(0, CGFloat.pi, 0)
    // size theRearPlane (width, height, etc.) to match the front plane
    theRearPlane.firstMaterial?.isDoubleSided = false
    rear.geometry = theRearPlane
    rear.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "rearImage") // placeholder name; your rear image/etc.
}
So ...
/// Double-sided sprite
class SCNTwoSidedNode: SCNNode {
    public var rear = SCNNode()
    private var thePlane = SCNPlane()

    override init() {
        super.init()
        // size thePlane as needed (SCNPlane defaults to 1 x 1)
        thePlane.firstMaterial?.isDoubleSided = false
        thePlane.firstMaterial?.transparencyMode = .aOne
        geometry = thePlane
        addRear()   // the method shown above
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
Consuming code can just refer to .rear, for example:
playerNode. ...       // the drawing of the Druid
playerNode.rear. ...  // Druid rules and abilities text
enemyNode. ...        // the drawing of the Mage
enemyNode.rear. ...   // Mage rules and abilities text
If you want to do this in the visual editor, it's just as easy: add the rear as a child and rotate the child 180 degrees on Y.
Make them both single-sided and put anything you want on the front and rear.
Simply move the main one (the front) normally and everything works.
I used the (new) GUI Builder and inserted an image (by adding a Label). However, it appears too big. Is there any way I can scale and control the size? (I saw something pointing to Cloudinary, but that seems too complicated; I just want to scale down the image.)
There are several ways to resize images in Codename One and I will mention a few below:
1. Use MultiImages in the GUI Builder. With this, multiple sizes of an image are generated from one image based on the sizes you specify. In the GUI Builder, click Images -> Add Multi Images -> select your image -> check Preserve Aspect Ratio -> increase the % that represents the percentage of the screen width you want the image to occupy. Set any DPI you don't require to 0.
2. Use ScaledImageLabel or ScaledImageButton; it will resize the image to fill the available space the component occupies.
3. Scale the image itself in code (this is not efficient, though):
public static Image getImageFromTheme(String name) {
    try {
        Resources resFile = Resources.openLayered("/theme");
        Image image = resFile.getImage(name);
        return image;
    } catch (IOException ioe) {
        //Log.p("Image " + name + " not found: " + ioe);
    }
    return null;
}

Image resizedImage = getImageFromTheme("myImage").scaledWidth(Math.round(Display.getInstance().getDisplayWidth() / 10)); //change value as necessary
4. Mutate the image (create a new image and draw the original into it at the size you want), as sketched below.
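A minimal sketch of option 4, assuming the getImageFromTheme() helper from option 3 and an image named "myImage" in the theme:

// create a mutable image and draw the original into it at the target size
Image source = getImageFromTheme("myImage");
int targetWidth = Display.getInstance().getDisplayWidth() / 10;            // change as necessary
int targetHeight = source.getHeight() * targetWidth / source.getWidth();   // keep the aspect ratio
Image scaledCopy = Image.createImage(targetWidth, targetHeight, 0x0);
Graphics g = scaledCopy.getGraphics();
g.drawImage(source, 0, 0, targetWidth, targetHeight);
// scaledCopy can now be set on a Label or used anywhere a smaller copy is needed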
I'm trying to build telemetry software using WinForms and the DevExpress library. Specifically, I'm working on a Line Chart Control, and what I would like to do is configure the chart so that it can display data changing in real time.
The graph is generated by reading external sensors that send values at a rate of 10 per second.
This is my code for initializing the chart:
series1 = new Series("test test", ViewType.Line);
chartControl1.Series.Add(series1);
series1.ArgumentScaleType = ScaleType.Numerical;
((LineSeriesView)series1.View).LineMarkerOptions.Kind = MarkerKind.Triangle;
((LineSeriesView)series1.View).LineStyle.DashStyle = DashStyle.Dash;
((XYDiagram)chartControl1.Diagram).EnableAxisXZooming = true;
chartControl1.Legend.Visibility = DefaultBoolean.False;
chartControl1.Titles.Add(new ChartTitle());
chartControl1.Titles[0].Text = "A Line Chart";
chartControl1.Dock = DockStyle.Fill;
And this is the code that adds a new point and removes the oldest one, so that (once a minimum number of points is reached) the number of points in my chart stays the same and the chart keeps displaying the last X seconds of values while discarding the old ones.
series1.Points.RemoveRange(0, 1);
series1.Points.Add(new SeriesPoint(time, value));
...
AxisXRange.SetMinMaxValues(newFirstTime, time);
AxisXRange is the following:
Range AxisXRange
{
    get
    {
        SwiftPlotDiagram diagram = chartControl1.Diagram as SwiftPlotDiagram;
        if (diagram != null)
            return diagram.AxisX.VisualRange;
        return null;
    }
}
**The problem** is that this code only works temporarily. After a few seconds, the chart stops working and a big red cross is displayed over it.
Is there something I'm missing in its configuration?
Do you know a better way to accomplish this?
Any help would be appreciated.
Thank you
I think you're doing it nearly right. DevExpress has an article about real-time charts. They do it the same way, but use a timer for updating the data. Maybe this would fix your painting problems.
I am displaying videos on a form but the video is always stretched to a square. I can't get hold of any video component to get its true size. This is the code to display the video:
imageVideoContainer = new Container(new BorderLayout(BorderLayout.CENTER_BEHAVIOR_SCALE)) {
    protected Dimension calcPreferredSize() {
        return new Dimension(Display.getInstance().getDisplayWidth(), Display.getInstance().getDisplayWidth());
    }
};
media = MediaManager.createMedia(FileSystemStorage.getInstance().getAppHomePath() + movePath, true);
mp = new MediaPlayer(media);
mp.setAutoplay(true);
imageVideoContainer.add(BorderLayout.CENTER, mp);

container = new Container(new BoxLayout(BoxLayout.Y_AXIS));
container.add(BorderLayout.centerAbsolute(imageVideoContainer));
If I don't override calcPreferredSize it doesn't display at all. Any help appreciated. I've tried debugging and looking into MediaPlayer for something that has a size, but can't find anything.
The problem is that until the video is loaded, the size isn't there. So when you add it to the form, its preferred size will be 0.
You then add it to center absolute, which requires a preferred size to position/size the video. A solution can be to start the video and then call revalidate() to redo the layout and position the video correctly.
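A rough sketch of that idea, reusing the variables from the question and assuming the enclosing form is in scope (the 500 ms delay is an arbitrary placeholder):

// start playback first, then re-run the layout once the native player
// has had a chance to report the real video size
media = MediaManager.createMedia(FileSystemStorage.getInstance().getAppHomePath() + movePath, true);
mp = new MediaPlayer(media);
mp.setAutoplay(true);
imageVideoContainer.add(BorderLayout.CENTER, mp);
form.show();
media.play();
UITimer.timer(500, false, form, () -> form.revalidate());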
Using the FFmpeg C API I have encoding and decoding of a video working. However, the re-encoded video stream does not maintain the original video's orientation (rotation), so vertical videos end up turned on their side.
I'm not sure how to resolve this. Is there a metadata field that needs to be set? Using MediaInfo I see the original video has a metadata field 'Rotation : 90°' and the new video does not.
Or does each encoded frame need to be rotated?
I've looked at the decoded frame's side_data but it is empty.
for (j = 0; j < decoded_frame->nb_side_data; j++) {
    AVFrameSideData *sd = decoded_frame->side_data[j];
    if (sd->type == AV_FRAME_DATA_DISPLAYMATRIX) {
        LOGI("=> displaymatrix: rotation of %.2f degrees", av_display_rotation_get((int32_t *)sd->data));
    }
}
I resolved this by adding 'Rotation' to the output video stream's metadata:
av_dict_copy(&output_stream->metadata, input_stream->metadata, AV_DICT_DONT_OVERWRITE);
There is a good explanation of the rotation metadata field here:
Correct Smartphone Video Orientation and How To Rotate iOS and Android Videos with ffmpeg