Arduino - is it possible to do some scrolling text? - rgb

Is it possible to do some scrolling text?
RGB LED matrix
Red LED matrix only
Can I use this WS2812 RGB LED matrix instead of the MAX7219 to build RGB scrolling text? I want to build a multicolor scrolling-text project just like the one in the MAX7219 video, but that video shows a red-only LED matrix.
I also want to know which chip the latest Arduino Uno R3 uses: the ATmega16U2 or the ATmega328?
Thanks.

Yes. You can use either solution for scrolling text.
The primary MCU of all Arduino Unos is the ATmega328P. The ATmega16U2 of the Uno R3 is used as a USB-UART bridge.
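For the WS2812 route, a minimal scrolling-text sketch would look something like the following. It assumes an 8x8 panel on pin 6 and one common option for driving it, the Adafruit_GFX / Adafruit_NeoMatrix / Adafruit_NeoPixel libraries; adjust the size, pin and layout flags to match your matrix.

#include <Adafruit_GFX.h>
#include <Adafruit_NeoMatrix.h>
#include <Adafruit_NeoPixel.h>

#define MATRIX_PIN 6   // assumed data pin; change to match your wiring

// Assumed layout: 8x8 panel, first pixel top-left, rows wired progressively.
Adafruit_NeoMatrix matrix(8, 8, MATRIX_PIN,
    NEO_MATRIX_TOP + NEO_MATRIX_LEFT + NEO_MATRIX_ROWS + NEO_MATRIX_PROGRESSIVE,
    NEO_GRB + NEO_KHZ800);

int x = matrix.width();

void setup() {
    matrix.begin();
    matrix.setTextWrap(false);
    matrix.setBrightness(40);
    matrix.setTextColor(matrix.Color(0, 255, 0));   // any RGB colour you like
}

void loop() {
    matrix.fillScreen(0);                // clear the frame
    matrix.setCursor(x, 0);
    matrix.print(F("HELLO"));
    if (--x < -36) x = matrix.width();   // wrap once the text (5 chars * ~6 px, plus slack) has scrolled off
    matrix.show();
    delay(100);                          // scroll speed
}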

Related

How can I create data pulses to control a WS2812B LED strip using the MSP430FR4133?

I'm trying to write code so that I can control the LED strip. To start with, all I want is to turn the first one or two LEDs a certain colour. I have no idea how to actually implement this, and I'm particularly unsure about how to output a data pulse from a GPIO pin.
I think I should use PWM to create the data pulse, but I'm struggling to get started. I know that I need to send 24 bits per LED to control the strip, but I'm not sure how this is possible using a GPIO pin. Any advice on where to look to get started would be really appreciated. There seem to be lots of resources for this on the Arduino, but I'm struggling to find much for the MSP430.
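For reference, each WS2812-family LED expects 24 bits in G-R-B order, MSB first, encoded by pulse width: a '1' is roughly 0.8 µs high / 0.45 µs low, a '0' roughly 0.4 µs high / 0.85 µs low, and holding the line low for more than ~50 µs latches the data. The fragment below is only a conceptual sketch of that bit-banging, assuming the data line on P1.0 and a 16 MHz MCLK; the __delay_cycles() counts ignore loop overhead, so real code is usually hand-tuned assembly or repurposes the SPI peripheral to generate the pulses.

#include <msp430.h>
#include <stdint.h>

#define LED_PIN BIT0   /* assumption: strip data line on P1.0                    */
                       /* init elsewhere: P1DIR |= LED_PIN; P1OUT &= ~LED_PIN;   */

/* Send one 24-bit GRB frame, MSB first (conceptual, timing not tuned). */
static void ws2812_send_grb(uint8_t g, uint8_t r, uint8_t b)
{
    uint32_t frame = ((uint32_t)g << 16) | ((uint32_t)r << 8) | b;
    int i;

    for (i = 23; i >= 0; i--) {
        P1OUT |= LED_PIN;                /* rising edge of the bit        */
        if (frame & (1UL << i)) {
            __delay_cycles(13);          /* ~0.8 us high  -> '1'          */
            P1OUT &= ~LED_PIN;
            __delay_cycles(7);           /* ~0.45 us low                  */
        } else {
            __delay_cycles(6);           /* ~0.4 us high  -> '0'          */
            P1OUT &= ~LED_PIN;
            __delay_cycles(14);          /* ~0.85 us low                  */
        }
    }
    /* after the last LED: keep the line low > 50 us to latch the colours */
}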

Convert a 3d graphic into rgb array

I'm working on a project that has a 3D LED cube (256*256*256). What software do I use to design the 3D model and to project it onto the LED cube? The LEDs used are RGB, but the answer need not be for RGB; even monochromatic LEDs are fine.
I've tried MATLAB, FFmpeg, OpenCV and AutoCAD, but none of them help.
Objective: create the 3D model => generate a matrix/array of colors => project the matrix onto the LED cube.
The generator has to be dynamic, so that any change to the 3D model is reflected on the LED cube. Any well-known simulator is welcome. Please comment on whether the software mentioned above can do this; image processing through MATLAB/OpenCV is new to me.
Input file: the 3D graphic model.
Output file: an array representation of colors (any format).
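Whatever tool produces the model, the intermediate "array of colors" is typically just a flat voxel buffer that the cube driver consumes. A sketch of what that hand-off structure might look like follows; the 256 edge length and RGB-per-voxel layout are assumptions, and at 256^3 voxels the buffer is roughly 48 MB, so a real driver would stream it slice by slice.

#include <cstdint>
#include <vector>

// Hypothetical hand-off format between the modelling tool and the cube driver:
// one RGB triple per voxel, stored x-fastest, then y, then z.
struct VoxelRGB { uint8_t r, g, b; };

class CubeFrame {
public:
    static constexpr int N = 256;                       // cube edge length (assumed)

    CubeFrame() : voxels(static_cast<size_t>(N) * N * N) {}

    VoxelRGB&       at(int x, int y, int z)       { return voxels[index(x, y, z)]; }
    const VoxelRGB& at(int x, int y, int z) const { return voxels[index(x, y, z)]; }

private:
    static size_t index(int x, int y, int z) { return (static_cast<size_t>(z) * N + y) * N + x; }
    std::vector<VoxelRGB> voxels;                       // ~48 MB for a full frame
};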

Image processing in C — processing a 256-color bitmap image

I am using Borland Turbo C and the Borland Graphics Interface.
I have two questions:
I have to process a 256-color bitmap image. It is difficult to process using the EGAVGA driver, so I decided to use the SVGA driver. It works fine, but when I convert the image to grayscale, instead of only the image turning gray, the whole screen goes into grayscale. Is there any method to change the color palette for a specific area using the outp(0x03C8, data) and outp(0x03C9, data) functions?
The mouse functions work fine in EGAVGA mode, but the cursor is not visible in SVGA mode, even though the mouse is functional. How could I create a custom mouse cursor for SVGA mode in 256 colors? I have the code for creating a custom mouse pointer in EGAVGA mode using interrupt 0x10, but it does not work in SVGA mode.
In palettized video modes, palette entries affect the whole screen. If you change any index, all pixels on screen with that index will change, whether or not they belong to your image.
If your image is going to share the screen with other elements, and you want it to be the only thing that turns grayscale, you have to set aside some palette entries for exclusive use by your image, so that changing them won't affect other graphic elements on the screen.
On Windows, and on X Window if memory serves, the entire screen takes on the colours of your palette when your application window has the focus. When it doesn't, the screen reverts to the system palette and your window and its contents will look "weird".
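For the reserved-entries approach, the two DAC ports the question already mentions are all you need: write the starting index to 0x3C8, then R, G, B triplets (6-bit values, 0..63) to 0x3C9, which auto-increments the index. A sketch that loads a grayscale ramp into an assumed reserved range looks roughly like this; the image's pixels must be remapped to use only those indices.

#include <dos.h>    /* outp() as used in the question; Borland's own name is outportb() */

/* Load a linear grayscale ramp into 'count' consecutive DAC entries
   starting at 'first'. DAC components are 6-bit (0..63). */
void load_gray_ramp(unsigned char first, int count)
{
    int i;
    unsigned char level;

    outp(0x03C8, first);              /* starting palette index          */
    for (i = 0; i < count; i++) {
        level = (unsigned char)((long)i * 63 / (count - 1));
        outp(0x03C9, level);          /* red                             */
        outp(0x03C9, level);          /* green                           */
        outp(0x03C9, level);          /* blue                            */
    }
}

/* e.g. reserve entries 64..191 for the image: load_gray_ramp(64, 128); */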

Convert YUV to RGB on DeckLink using hardware

I'm currently ingesting HD1080p video at 59.94 FPS from a camcorder via the HDMI input on the DeckLink 4K Extreme.
My goal is to replicate the incoming image in a WPF UI element. To accomplish this I'm using the DeckLink SDK in a C# WPF application.
In this program I've implemented the VideoInputFrameArrived callback. In this callback I'm copying the bytes from each frame into a WriteableBitmap which I've set as the source for an Image.
All this works as it should, and when I run the program, the Image is indeed updated in real time as the frames arrive.
My problem then, is that the only two supported Pixel Formats for the video input are 8BitYUV and 10BitYUV, neither of which can be natively displayed on computer monitors.
The WriteableBitmap can only take in various RGB, Black and White, and CMYK formats.
Here is what I've tried so far.
I've tried to convert each frame using the IDeckLinkVideoConversion::ConvertFrame()
Problem: ConvertFrame() requires a destination frame to be rendered on the DeckLink using IDeckLinkOutput::CreateVideoFrame(). As I currently understand it, the DeckLink cannot act as both an input (to capture the video feed) and an output (to render the destination frame).
I've set the incoming stream to 8BitYUV, and copied each frame into the WriteableBitmap with a format of BGR32.
Problem: As I mentioned earlier, this will display an image, but the color is incorrect and the picture is only half the width that it needs to be.
The reason for this is that the incoming stream of 8BitYUV is 16 bits/pixel, whereas the Bitmap expects 32 bits/pixel, and so the Bitmap treats each incoming MacroPixel (4 bytes) as one pixel instead of the 2 pixels it really is.
Currently I'm using a pixel shader to fix the color and a RenderTransform to scale the Image horizontally by a factor of 2 to "fix" the aspect ratio. The problem is that I end up with only half of the original horizontal resolution.
I don't believe this is a hardware limitation, because when I hook up another monitor to the HDMI output on the DeckLink, the incoming picture displays in full 1080p in perfect color. Would it be possible to capture that outgoing stream somewhere in memory?
TL;DR
What is the best way to convert 4:2:2 YUV (UYVY) into an RGB or CMYK pixel format in real time? (1080p @ 59.94 FPS)
Preferably a hardware solution i.e. DeckLink or GPU.
You have several options here.
First of all, you can display UYVY directly. Most video adapters will accept UYVY data through DirectDraw, DirectShow and DirectX (versions up to 9) APIs, so you won't need a real-time conversion of the video frames. Integrating this into a WPF application might require some effort; perhaps the most popular way is to use DirectShow through the DirectShow.NET library and the WPF Media Kit. Going this route you could also capture video with DeckLink's DirectShow capture filter and connect all the parts together more quickly. However, you already capture using the DeckLink SDK, which gives you more control and flexibility over the capture process, so you might not want to go back to DirectShow.
The second option is to convert to RGB as you wanted. I don't think the DeckLink can do it for you, and while GPU-based conversion definitely exists (the conversion formula is well known, simple and easy to parallelize), it is hardware dependent or otherwise not immediately available. Instead, Microsoft ships the Color Converter DSP, which can do the conversion (from 8 bits, though not 10) very efficiently. The API is native, and you might need Media Foundation .NET to access it from your app. An alternative efficient software conversion can also be done using FFmpeg's libswscale (for a managed app, through the respective wrappers).
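For reference, here is what that well-known formula looks like as a plain CPU fallback, assuming 8-bit UYVY input and a BGR32 destination; it uses full-range BT.709 coefficients as an approximation and ignores the 16-235/16-240 studio-range scaling, so treat it as a sketch rather than proper colour management.

#include <cstdint>
#include <algorithm>

// Convert one row of 8-bit UYVY (two pixels per 4-byte macropixel) to BGRA32.
static void UyvyRowToBgra(const uint8_t* src, uint8_t* dst, int width)
{
    auto clamp8 = [](float v) {
        return (uint8_t)std::min(255.0f, std::max(0.0f, v));
    };

    for (int x = 0; x < width; x += 2)        // one macropixel = U Y0 V Y1
    {
        float u  = src[0] - 128.0f;
        float y0 = src[1];
        float v  = src[2] - 128.0f;
        float y1 = src[3];
        src += 4;

        float ys[2] = { y0, y1 };             // both pixels share the same chroma
        for (float y : ys)
        {
            dst[0] = clamp8(y + 1.8556f * u);                 // B
            dst[1] = clamp8(y - 0.1873f * u - 0.4681f * v);   // G
            dst[2] = clamp8(y + 1.5748f * v);                 // R
            dst[3] = 255;                                     // A (opaque)
            dst += 4;
        }
    }
}

At 1080p59.94 this is roughly 124 million pixels per second, so in practice you would still want the DSP, libswscale or a shader to carry the load.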
I just did this with the DeckLink API, because the card I have can act as both an input and an output, and the output does not need to be in playback mode to access this part of the API:
com_ptr<IDeckLinkOutput> m_deckLinkOutput;
if (SUCCEEDED(m_deckLink->QueryInterface(IID_IDeckLinkOutput, (void **)&m_deckLinkOutput)))
{
    IDeckLinkMutableVideoFrame *pRGBFrame;

    // Create a BGRA destination frame matching the captured frame
    // (row pitch = width * 4 bytes per pixel).
    if (SUCCEEDED(m_deckLinkOutput->CreateVideoFrame(videoFrame->GetWidth(), videoFrame->GetHeight(),
                                                     videoFrame->GetWidth() * 4, bmdFormat8BitBGRA,
                                                     videoFrame->GetFlags(), &pRGBFrame)))
    {
        // Convert the captured YUV frame into the BGRA frame.
        m_deckLinkVideoConversion->ConvertFrame(videoFrame, pRGBFrame);

        // ... use the RGB frame ...
        pRGBFrame->Release();
    }
}

Switch Between Graphic And Text Mode in Turbo C

Guys, I am writing a simple graphics program that creates a polygon of n sides by taking input from the user. After obtaining the coordinates of the vertices, I ask the user to enter the vertex pairs between which he wants an edge.
To make this more interactive, I thought I would gradually draw the polygon in graphics mode at the same time, i.e. I would add the edges one by one and display them to the user. Then I would switch back to text mode to obtain the next set of vertices between which he wants to insert edges. But I found that as I switch between graphics and text mode, everything I drew in graphics mode is erased.
Is there any way, or any function in the Turbo C compiler, to switch between text and graphics mode while preserving the contents of the graphics screen? Should I use a different compiler?
Switching between modes makes the video adapter lose all retained graphics. A workaround for this is to use a 'canvas', an in-memory bitmap that stores the pixels. You'd make modifications to this bitmap and blit it to the video adapter to make it visible. This is not supported by the ancient graphics library you use. Review the CreateCompatibleDC() winapi function if you plan to move ahead.
This is hardly a problem. Simply re-render the graphics when you switch back to graphics mode. You do have to store a 'model' of the polygon so you can render it. Just store the vertex points.
You could use the restorecrtmode(), setgraphmode() and getgraphmode() functions available in the Turbo C library. These functions are declared in the graphics.h header file.
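A minimal sketch combining the two previous answers (remember the current mode, drop to text mode for input, then restore graphics mode and re-render from your stored model); add_edge() and redraw_polygon() are hypothetical functions standing in for your own drawing code:

#include <graphics.h>
#include <stdio.h>

void get_edge_and_redraw(void)
{
    int mode = getgraphmode();       /* remember the current graphics mode      */
    int a, b;

    restorecrtmode();                /* switch back to the original text mode   */
    printf("Enter vertex pair: ");
    scanf("%d %d", &a, &b);

    setgraphmode(mode);              /* re-enter graphics mode (screen cleared) */
    /* add_edge(a, b);      -- hypothetical: update the stored model            */
    /* redraw_polygon();    -- hypothetical: re-render all edges drawn so far   */
}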
You may stay in graphics mode to get user input, but you will need to create an input function that works in graphics mode, reading character by character (getch()), composing your input and updating the graphics screen with the characters typed, as sketched below. If your graphics card has more than one page, you can use setactivepage() and setvisualpage() to create separate pages for the data entry and the graphics.
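A rough sketch of such a graphics-mode input routine, assuming the BGI text functions; it does not handle backspace or editing:

#include <graphics.h>
#include <conio.h>

/* Read a line of text while staying in graphics mode, echoing each
   character at (x, y) with outtextxy(). Enter finishes the input. */
void graf_input(int x, int y, char *buf, int maxlen)
{
    int len = 0;
    char echo[2];

    echo[1] = '\0';
    while (len < maxlen - 1) {
        int c = getch();
        if (c == '\r')
            break;
        buf[len++] = (char)c;
        echo[0] = (char)c;
        outtextxy(x, y, echo);       /* echo the character              */
        x += textwidth(echo);        /* advance the graphics "cursor"   */
    }
    buf[len] = '\0';
}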
