Convert ID3D11Texture2D into a memory buffer - directx-11

How can we convert an ID3D11Texture2D into a memory buffer? I have an ID3D11Texture2D* and need to read its data into a memory buffer.

You need to create a second texture with the same format and size, but create it as a staging texture.
In the texture description, CPUAccessFlags needs to be set to D3D11_CPU_ACCESS_READ and Usage needs to be set to D3D11_USAGE_STAGING.
Then you can call ID3D11DeviceContext::CopyResource to copy from the texture to the staging one, and finally call Map to access the data.
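A minimal sketch of that sequence, assuming a non-multisampled texture and using placeholder names (device, context, srcTexture) for the application's existing D3D11 objects:

#include <d3d11.h>
#include <cstdint>
#include <cstring>
#include <vector>

// Copy the contents of srcTexture into a CPU-side buffer.
// Rows in the result are mapped.RowPitch bytes apart (the pitch may include padding).
bool ReadTextureToBuffer(ID3D11Device *device, ID3D11DeviceContext *context,
                         ID3D11Texture2D *srcTexture, std::vector<uint8_t> &out)
{
    // Describe a staging twin of the source texture.
    D3D11_TEXTURE2D_DESC desc;
    srcTexture->GetDesc(&desc);
    desc.Usage          = D3D11_USAGE_STAGING;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.BindFlags      = 0;
    desc.MiscFlags      = 0;

    ID3D11Texture2D *staging = nullptr;
    if (FAILED(device->CreateTexture2D(&desc, nullptr, &staging)))
        return false;

    // GPU-side copy into the staging texture, then map it for CPU reads.
    context->CopyResource(staging, srcTexture);

    D3D11_MAPPED_SUBRESOURCE mapped;
    if (FAILED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped))) {
        staging->Release();
        return false;
    }

    out.resize(size_t(mapped.RowPitch) * desc.Height);
    std::memcpy(out.data(), mapped.pData, out.size());

    context->Unmap(staging, 0);
    staging->Release();
    return true;
}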

Related

Read entire file bytes at once using InterSystems Caché?

I have a file of bytes, 1.5 GB in size {filebyte}. I want to read the entire file in one operation, similar to Delphi's
bytedata:=filebyte.readallbytes(filename);
The result being that in one call you have a byte array whose number of elements is high(bytedata)-low(bytedata)+1. Is there equivalent code in Caché? Can a file of 1.5 GB in size be held in memory in Caché?
I do not want to read the file in blocks, as the operation to analyse the data requires that the whole file be in memory at one time.
Thanks
You can read as much data from the stream as you need. The problem is how much you can store in a local variable.
set fs=##class(%Stream.FileCharacter).%New()
set fs.Filename="c:\test.txt"
set length=fs.Size
set data=fs.Read(length) // if the size is no more than about 3.5 MB
Local variable size is limited to 3,641,144 bytes, or to 32,767 bytes if long strings are disabled. Up to version 2012.1, memory per process was limited to 48 MB. In 2012.2 this was changed: it is now possible to set up to 2 terabytes per process, and this can even be done programmatically at run time for the current process via the special variable $zstorage.

Pipelining a set of C buffers

I am creating Ethernet packets in an embedded system. I have my Data / IP and UDP packet headers defined in pre-allocated buffers and I have a large buffer that is used to grab data from the FPGA's fabric using DMA.
I also have some user data headers and footers where the data comes from the fabric in other ways, mostly SPI transfers of temperature, PCB address, etc., or even grabs of some of the configuration registers (a single transaction, on boot).
Now, at the moment I concatenate these using memcpy into a new, larger buffer (also pre-allocated), and then send it to the Transmit buffer of the on-FPGA MAC.
My issues:
1) All these buffers are on the FPGA and hence require memory. I could copy them one at a time into the MAC Tx buffer, but this would prevent my second idea.
2) Everything being buffers gives the possibility of forming a pipeline, where new data (DN+1) can be put into the first buffers while subsequent buffers are still storing and concatenating the data of (DN+0).
If I have nicely modularised code, how do I create a pipeline from buffer to buffer? In hardware I'd use flags, only passing data from buffer A to B when buffer B has finished passing its data to C. In terms of C, memcpy and memmove return nothing that signals completion to other code (only the destination pointer), so I'd need to make my own boolean flag that is set after memcpy finishes, and I'd need to make these flags global so that I can easily pass their status into other functions (a sketch of such a flag-based hand-off is given below).
Finally, as this is embedded, I don't have access to the full C libraries and both time and memory are at a premium.
Thanks
Ed
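A minimal sketch of the flag-based hand-off described in the question, assuming a bare-metal main-loop design; the stage buffers, frame size and the MAC Tx hooks (mac_tx_ready, mac_tx_write) are invented names, not part of the question:

#include <cstddef>
#include <cstring>

constexpr std::size_t FRAME_MAX = 1536;

static unsigned char stage_a[FRAME_MAX];    // receives new data (DN+1)
static unsigned char stage_b[FRAME_MAX];    // holds the concatenated frame (DN+0)

static volatile bool stage_a_full = false;  // set once stage_a holds a complete frame
static volatile bool stage_b_busy = false;  // set while stage_b still owns its frame

// Hypothetical MAC Tx hooks provided elsewhere in the project.
bool mac_tx_ready();
void mac_tx_write(const unsigned char *frame, std::size_t len);

// Producer side: call after headers + payload have been assembled in stage_a.
void stage_a_ready() { stage_a_full = true; }

// Main loop: data only moves A -> B when B has finished handing its frame to the MAC.
void pump_pipeline(std::size_t frame_len)
{
    if (stage_a_full && !stage_b_busy) {
        std::memcpy(stage_b, stage_a, frame_len);  // memcpy has completed when it returns
        stage_a_full = false;                      // stage_a may now take data for DN+1
        stage_b_busy = true;                       // stage_b owns frame DN+0 until sent
    }
    if (stage_b_busy && mac_tx_ready()) {
        mac_tx_write(stage_b, frame_len);
        stage_b_busy = false;
    }
}

If the copies are driven from an interrupt handler or a second core rather than a single main loop, the flags would need real atomics and memory barriers rather than plain volatile.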

How to memory map data already in memory to a file

I am working on a program which needs to load up to a few hundred images into memory at once. Each file takes up 100 MB, so I don't really want to be storing all of them in memory. I want to memory-map the files so the operating system will swap them out when necessary to save physical memory. Here is what I am wondering: if I already have the data I want in the file in malloc'ed memory, should I open a file descriptor, write the data to the file using write(), and then map the file? Or can I memory-map a new file and then copy the data using memcpy()? If I create a new file and, when I call mmap(), give it a length larger than the file size, will it just increase the size of the file on disk?
From the POSIX standard: “The mmap() function can be used to map a region of memory that is larger than the current size of the object. Memory access within the mapping but beyond the current end of the underlying objects may result in SIGBUS signals being sent to the process.” (http://pubs.opengroup.org/onlinepubs/9699919799/)
That said, you could try mmap() with MAP_FIXED over the same memory region you just wrote from, if you got it from a page-aligned aligned_alloc() rather than malloc(), or free and then mmap(). But note that the OS will page memory you haven’t used for a while to swap anyway, and you can help it out with posix_madvise().
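A minimal sketch of the second approach (create a new file, map it, then memcpy the malloc'ed data into the mapping), assuming a POSIX system; the function and variable names are mine, not from the question. The ftruncate() call is the important part: mmap() itself never grows the file, and touching mapped pages past the file's end can raise SIGBUS, as the quote above warns.

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstring>

// Write an in-memory image to 'path' by mapping the file and copying into the mapping.
int write_through_mmap(const char *path, const void *data, std::size_t len)
{
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    // Grow the file first: accessing the mapping beyond the file's end may raise SIGBUS.
    if (ftruncate(fd, static_cast<off_t>(len)) < 0) { close(fd); return -1; }

    void *map = mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                       // the mapping stays valid after close()
    if (map == MAP_FAILED)
        return -1;

    std::memcpy(map, data, len);     // copy the malloc'ed data into the file mapping
    msync(map, len, MS_SYNC);        // flush dirty pages to disk (optional but explicit)
    munmap(map, len);
    return 0;
}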

Fastest way to display a screen buffer captured from other PC

My problem is the following:
I have a pointer that stores a framebuffer which is constantly changed by some thread.
I want to display this framebuffer via the OpenGL APIs. My trivial choice is to use glTexImage2D and upload the framebuffer again and again, every frame. Reloading the framebuffer is necessary because it is modified outside of the OpenGL APIs. I think there could be some methods or tricks to speed this up, such as:
By finding out the changes inside the framebuffer (Is it even possible?)
Some method for fast re-loading of image
Could OpenGL directly use the pointer to the framebuffer? (So less framebuffer copying)
I'm not sure whether the above approaches are valid or not. I hope you can give me some advice.
By finding out the changes inside the framebuffer (Is it even possible?)
That would reduce the required bandwidth, because it's effectively video compression. However, the CPU load is notably higher and it's much slower than just DMA-copying the data.
Some method for fast re-loading of image
Use glTexSubImage2D instead of glTexImage2D (note the …Sub…). glTexImage2D goes through a full texture initialization each time it's called, which is rather costly.
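For example, a per-frame upload could look like the following sketch, assuming the texture object texID was allocated once at startup with glTexImage2D and that width, height and framebuffer are the application's own variables:

glBindTexture(GL_TEXTURE_2D, texID);
/* Re-specify only the pixel data; the texture storage itself is not re-created. */
glTexSubImage2D(GL_TEXTURE_2D, 0,              /* target, mip level            */
                0, 0, width, height,           /* replace the whole image      */
                GL_BGRA, GL_UNSIGNED_BYTE,     /* assumed source pixel layout  */
                framebuffer);                  /* pointer updated by the other thread */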
If your program does additional things with OpenGL rather than just displaying the image, you can speed things up further by reducing the time the program spends waiting for things to complete, using Pixel Buffer Objects. The essential gist is:
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboID);
void *pbuf = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
thread image_load_thread = start_thread_copy_image_to_pbo(image, pbuf);
do_other_things_with_opengl();
join_thread(image_load_thread);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboID);
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
glBindTexture(…);
glTexSubImage2D(…, NULL); /* data argument is an offset into the bound PBO; NULL = start */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
draw_textured_quad();
Instead of creating and joining the thread each time, you may as well use a thread pool and condition variables for inter-thread synchronization.
Could OpenGL directly use the pointer to the framebuffer? (So less framebuffer copying)
Stuff almost always has to be copied around. Don't worry about copies if they are necessary anyway. The Pixel Buffer Objects I outlined above may or may not save you a copy in system RAM. Essentially you can glMapBuffer the PBO into your process' address space and decode directly into the buffer you're given, but there's no guarantee that a further copy is avoided.

How to make disk-based buffer just like memory?

I have seen a similar question on this site, but there is no helpful answer.
Scenario:
The data transmission process is as follows:
embedded devices-------->buffer-------->AWS(Cloud Storage)
Conditions:
Owing to the limits of the embedded device, there is not enough memory to store the data.
My idea:
Use mmap() to allocate "memory" on disk, and manage the data using another library, an open-source lib on GitHub.
Problem:
However, I have just discovered that it still occupies real (physical) memory, so this method does not seem to solve my problem.
What's your idea, buddy?
All mmap(2) does is avoid an extra data copy operation between the user-space application's buffer and a kernel holding buffer. The portion of the real file which is mapped becomes part of the application's virtual address space and occupies physical memory in the block cache, even if you are using an anonymous map (a map without a backing file, with the fd argument set to -1).
So, by moving the mmap(2) window you can gain direct access to the kernel's buffer cache holding the file data. Use a 4K map window, matching the hardware page size of the virtual memory system; the file can then be of arbitrary size while only a 4K window into it is mapped at any one time.
The good thing about mmap(2) is that you can open the file, create the mmap(2) window, and then close the file. You can then access the file data using loads and stores, treating the mapped window as a data array object.
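A minimal sketch of that sliding 4K window, assuming POSIX and a page-aligned offset; the helper names are mine. If you slide the window by creating new mappings, keep the descriptor open until the last window has been created, since each mmap() call takes it as an argument.

#include <cstddef>
#include <sys/mman.h>
#include <unistd.h>

constexpr std::size_t WINDOW = 4096;   // one hardware page

// Map a 4 KB window of the file starting at 'offset' (offset must be page-aligned).
void *map_window(int fd, off_t offset)
{
    return mmap(nullptr, WINDOW, PROT_READ | PROT_WRITE, MAP_SHARED, fd, offset);
}

// Unmap the current window before sliding to the next offset.
void unmap_window(void *window)
{
    munmap(window, WINDOW);
}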
