PxMatrix library with 32x16 RGB LED matrix panels: chaining 30 modules into a larger display

I am unable to increase the brightness using display.brightness(255).
Can anyone help me with display.begin(8, 4, etc.) if I want to keep 30 modules of 32x16? I am not understanding the scan pattern.
What is display.setMuxPattern()?
Can anyone please explain this to me?
Thank you in advance.
To increase the brightness I changed the value to 255 in display.brightness(255), but there was not much change.
Then I changed the colour depth with #define PxMATRIX_COLOR_DEPTH 6, after which the display started flickering but the brightness increased.


Finding a proportion/ratio for clamping a font size, given min/ideal/max font sizes and min/current/max screen widths

I am trying to create a clamping formula (the same logic as CSS clamp()) to make the typography in PowerApps more responsive. I have a minimum size, a maximum size, and an ideal size that is a dynamic calculation. Which gives us:
Max(min_, Min(ideal_, max_))
Now I am struggling to find that ratio. In my case the screen width never goes below 360px; the minimum font size for this example is 16px and the maximum is 40px, reached when the screen is large/extra large (meaning anything above 900px in our case).
Now how can I write a formula that calculates a value in between these two that considers the current width of the screen? This has very little to do with PowerApps; it is more of a math question and general responsive design, I just don't know how to do it :D
My guess is a compound proportion, as in:
16px font -> 360px width
x px font -> current width
40px font -> width above 900px
Is this logic right? What do I do now? This might look obvious to you, so please try to guide me through it or point me to a video/link/article.
Thank you all.
For whoever is wondering about this: I think I found the answer.
Max(minsize_, Min(minsize_ + (maxsize_ - minsize_) * ((App.Width - App.MinScreenWidth) / (maxscreenwidth_ - App.MinScreenWidth)), maxsize_))
Taken and adapted from https://css-tricks.com/snippets/css/fluid-typography/
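The formula above is just a linear interpolation between the two sizes, clamped at both ends. A minimal Python sketch of the same logic (the names are placeholders, not PowerApps identifiers):

```python
def clamp_font_size(width, min_size=16, max_size=40,
                    min_width=360, max_width=900):
    """Linearly interpolate the font size between min_size and max_size
    as the screen width moves from min_width to max_width, then clamp."""
    ratio = (width - min_width) / (max_width - min_width)
    ideal = min_size + (max_size - min_size) * ratio
    return max(min_size, min(ideal, max_size))

print(clamp_font_size(360))   # 16.0 (at the minimum width)
print(clamp_font_size(630))   # 28.0 (halfway between 360 and 900)
print(clamp_font_size(1200))  # 40.0 (clamped at the maximum)
```

At 630px, exactly halfway between 360 and 900, the ratio is 0.5 and the font lands halfway between 16 and 40, which is the behaviour the compound proportion was aiming for.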

What do the channels do in a CNN?

I am a newbie with CNNs and I want to ask what the channels do, in SSD for example. Why do they exist? For example, in 18x18x1024, what is the third number?
Thanks for any answer.
The dimensions of an image can be represented using 3 numbers. For example, a color image in CIFAR-10 dataset has a height of 32 pixels, width of 32 pixels and is represented as 32 x 32 x 3. Here 3 represents the number of channels in your image. Color images have a channel size of 3 (usually RGB), while a grayscale image will have a channel size of 1.
A CNN will learn features of the images that you feed it, with increasing levels of complexity. These features are represented by the channels. The deeper you go into the network, the more channels you will have representing these complex features. These features are then used by the network to perform object detection.
In your example, 18x18x1024 means your input image is now represented with 1024 channels, where each channel represents some complex feature/information about the image.
Since you are a beginner, I suggest you look into how CNNs work in general, before diving into object detection. A good start would be image classification using CNNs. I hope this answers your question. Happy learning!! :)
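To see why the third number is the channel count: each convolutional filter spans all input channels and produces exactly one output channel, so the number of output channels equals the number of filters. A small Python sketch of the shape arithmetic (the layer sizes below are made up for illustration):

```python
def conv_output_shape(in_h, in_w, in_c, kernel, n_filters, stride=1, pad=0):
    """Shape of a conv layer's output. Each filter covers all in_c input
    channels and yields ONE output channel, so channels out == n_filters."""
    out_h = (in_h + 2 * pad - kernel) // stride + 1
    out_w = (in_w + 2 * pad - kernel) // stride + 1
    return (out_h, out_w, n_filters)

# A 32x32 RGB image (3 channels) through 64 3x3 filters with padding 1:
print(conv_output_shape(32, 32, 3, kernel=3, n_filters=64, pad=1))      # (32, 32, 64)
# Deeper in the network, more filters mean more channels, e.g. 1024:
print(conv_output_shape(18, 18, 512, kernel=3, n_filters=1024, pad=1))  # (18, 18, 1024)
```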

Controlling a WVGA display with the STM32F429-Discovery LTDC

I am trying to output some data on a 7-inch TFT-LCD display (MCT070PC12W800480LML) using the LCD-TFT display controller (LTDC, 18 bits) on an STM32F4.
The LTDC interface settings are configured in CubeMX. In the program, an LCD data buffer is created with some values and its starting address is mapped to the LTDC frame buffer start address.
At the moment the display does not react to the data sent by the LTDC. It only shows white and black stripes after I connect ground and power for the digital circuit to a 3 V source. VLED+ is connected to a 9 V source. The VSYNC, HSYNC and CLOCK signals are generated by the LTDC and they match the specified values. I measured them on the LCD strip, so the connection should be right. I also tried putting a pulse on the LCD reset pin, but that made no difference.
The timing settings might be wrong.
The LTDC clock is 33 MHz.
Here is a link to the display datasheet: http://www.farnell.com/datasheets/2151568.pdf?_ga=2.128714188.1569403307.1506674811-10787525.1500902348 I saw some other WVGA displays using the same timing for the synchronization signals, so I assume the timings are standard for this kind of display.
Maybe the signal polarity is wrong, or I am missing something else. The program I am using now worked on the STM32F429-Discovery's built-in LCD; I just changed the timings. Any suggestions?
Thank you.
It could be something else, but I can see a problem with your timing values.
The back porch for both horizontal and vertical includes the sync pulses, but there must be a nonzero sync pulse width. My observation is that you have tried to hit the datasheet totals (1056 clocks horizontal, 525 lines vertical) by setting the sync pulses to 0. That won't work.
I would make the hsync pulse 20 and the vsync pulse 10. The total clocks will be the same, and it is not critical that they match the spec sheet exactly.
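To make the arithmetic concrete, here is a sketch of one hypothetical porch/sync split that keeps the datasheet totals. Only the 1056 x 525 totals, the 800x480 active area, and the nonzero sync widths come from the discussion above; the exact back/front porch split is an assumption. Note that the STM32 LTDC registers hold accumulated values:

```python
# Hypothetical split of the 1056 x 525 WVGA totals (800x480 active area),
# with nonzero sync pulse widths as suggested above.
HSYNC, HBP, HACTIVE, HFP = 20, 26, 800, 210
VSYNC, VBP, VACTIVE, VFP = 10, 13, 480, 22

# The LTDC timing registers take accumulated (cumulative minus one) values:
accumulated_hbp      = HSYNC + HBP - 1                   # end of horizontal back porch
accumulated_active_w = HSYNC + HBP + HACTIVE - 1         # end of active width
total_width          = HSYNC + HBP + HACTIVE + HFP - 1   # end of line

print(HSYNC + HBP + HACTIVE + HFP)  # 1056 clocks per line
print(VSYNC + VBP + VACTIVE + VFP)  # 525 lines per frame
```

Whatever split you choose, the per-line and per-frame totals should still add up to the datasheet figures; only then does the 33 MHz pixel clock give the intended refresh rate.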

Increase Intensity of SCNLights

Hello, I have an SCNScene that is the basis of my game. The lighting was tricky, and to get the effect I wanted I ended up duplicating three lights three times. This increased the intensity of the lights to create the effect and colors I wanted. However, I know that 9 lights all casting shadows has been taking a toll on my fps. Is there any way to increase the intensity of the lights, as duplicating them did, without destroying my fps?
Thanks!
What type of lights do you have? Do they have non-default attenuation values? (See attenuationStartDistance, attenuationEndDistance and attenuationFalloffExponent.)
You can try to increase the brightness of your lights' colors if that's possible (if they aren't already 100% white, for instance).
Otherwise you can use shader modifiers. The SCNShaderModifierEntryPointLightingModel entry point will let you customize the effect of each light.
In iOS 10 and macOS 10.12, SCNLight has an intensity: CGFloat property that lets you multiply the brightness of each light. Assuming you're not using PBR/IES, intensity acts as a per-mille multiplier: 1000 = 1x, 3000 = 3x, 100 = 0.1x, etc. (When using PBR or IES lighting, intensity instead controls the luminous flux of the light.)
To triple the brightness of each SCNLight, simply do:
myLight1.intensity = 3000

RGB value detection and implementation

I'm writing an application that displays different color swatches to help people with color coordination. How can I find the RGB values of real world objects?
For example, one of the colors is Red Apple but obviously a red apple isn't just red. It has hints of other colors in it.
Well, it's not an easy task to be honest, but a good place to start would be with a digital camera and/or a flatbed scanner.
Once you have an image on the computer, the task is somewhat easier, because all you need is a picture/photo editing package such as Photoshop or the GIMP to sample a selection of colours before using them in your application.
Once you have a few different samples, you need to average them, and that's quite easy to do. Let's say you took 5 samples of RGB values:
255,50,10
250,40,11
253,51,15
248,60,13
254,45,20
You simply need to add up each component and divide by how many samples you took, so:
Red = (255 + 250 + 253 + 248 + 254) / 5
Green = (50 + 40 + 51 + 60 + 45) / 5
Blue = (10 + 11 + 15 + 13 + 20) / 5
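In code, the same per-channel averaging might look like this (Python, using the five samples above):

```python
# The five RGB samples listed above.
samples = [
    (255, 50, 10),
    (250, 40, 11),
    (253, 51, 15),
    (248, 60, 13),
    (254, 45, 20),
]

# Average each channel independently across all samples.
red   = sum(s[0] for s in samples) / len(samples)
green = sum(s[1] for s in samples) / len(samples)
blue  = sum(s[2] for s in samples) / len(samples)

print(red, green, blue)  # 252.0 49.2 13.8
```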
Now, if what you're asking is how to do this automatically in program code, that's a whole different kettle of fish. First you'll need something like a webcam, then you'll need to write code to capture images from it. Once you have your image, you'll need not just the ability to pick a colour, but to actually figure out where in the image the object you want to pick the colour from is.
For now, I'd look at using the first method; it's a bit manual, I agree, but far easier and will get you started.
The image processing required for the second approach has given software engineers and computer scientists headaches for years and is still not a perfect science... and that's before we even start thinking about the maths.
For each object, I would do it this way:
1. Use Google Images to search for pictures of the object you want.
2. Select the one that has the most accurate color, say, for your idea of a "red apple".
(You can skip steps 1 and 2 if you have a digital picture of the object.)
3. Open that image in Paint. You can do this by pressing the "Print Screen" key on your keyboard, opening Paint, and then pressing Ctrl+V to paste the screenshot.
4. Select the pick-color tool in Paint (the one that looks like a dropper) and click on the image, right on the spot with the color you want.
5. From the menu select "Colors -> Edit colors", and in the color palette that opens, click "Define Custom Colors".
6. You've got it: the RGB values are there on the right.
There must be an easier way, but this will work.
If you're looking for a programmatic solution, you would look into bitwise operations. The general idea is that you read the image in its binary form and then logically convert the bits into RGB values. There are several methods for doing this depending on the programming language. Here is a method for ActionScript 3:
http://www.flashandmath.com/intermediate/rgbs/explanations.html
Also, if you're looking for the average color, look here (for AS3):
http://blog.soulwire.co.uk/code/actionscript-3/extract-average-colours-from-bitmapdata
And a related method and explanation for Java:
Bitwise version of finding RGB in java
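The bitwise idea behind those links is language-agnostic. For illustration, here is the same shift-and-mask extraction in Python, assuming a pixel packed as a 24-bit 0xRRGGBB integer:

```python
def rgb_from_pixel(pixel):
    """Unpack a 24-bit 0xRRGGBB pixel into its channels with shifts and masks."""
    red   = (pixel >> 16) & 0xFF  # top byte
    green = (pixel >> 8) & 0xFF   # middle byte
    blue  = pixel & 0xFF          # bottom byte
    return red, green, blue

def pixel_from_rgb(red, green, blue):
    """Pack the three channels back into a single integer."""
    return (red << 16) | (green << 8) | blue

print(rgb_from_pixel(0xFC320A))          # (252, 50, 10)
print(hex(pixel_from_rgb(252, 50, 10)))  # 0xfc320a
```

The same shifts and masks work in ActionScript or Java; only the syntax for reading the pixel out of the bitmap differs.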
