micrium file system get amount of space left - filesystems

I am wondering if there are any built-in functions in uC/FS to get the amount of space left on my SD card, which is formatted as FAT32.
Many thanks.
PS. I have looked around but don't seem to be able to find it in the manual.

To retrieve the amount of free space on a µC/FS volume, call the FSVol_Query() function and compute the free space from the data returned in the FS_VOL_INFO structure.
FS_VOL_INFO vol_info;
CPU_INT64U size_rem;
FS_ERR err;
FSVol_Query("sdcard:0:", &vol_info, &err);           /* query the volume's information */
if (err != FS_ERR_NONE) {
/* oops, something went wrong, handle error */
}
/* free bytes = free sectors * sector size (64-bit math to avoid overflow on large cards) */
size_rem = (CPU_INT64U)vol_info.VolFreeSecCnt * vol_info.DevSecSize;
Where "sdcard:0:" should be replaced by the volume name of which you'd like to retrieve the amount of free space. The function is documented in section A-7-12 of the user manual.
If using the previous (V3.X) version, check the FS_GetVolumeInfo() function.

Related

Add GPS tags in libtiff

I need to add GPS metadata to TIFF / DNG images using LibTIFF. I know very little about LibTIFF. To be extra annoying, I have to hack this into an existing LuaJIT module with minimal footprint (Edit: to be clear, I don't need an answer in LuaJIT!).
From this question, it seems like I need to set the TIFFTAG_SUBIFD field first, then write the main IFD, then write the GPS IFD. I'm confused on two points:
There's a special LibTIFF name for TIFFTAG_GPSIFD, and I'm not sure where that comes in, or if that's supposed to happen in place of setting TIFFTAG_SUBIFD.
I don't know how to assemble the GPS IFD in the first place. If indeed I'm supposed to set TIFFTAG_SUBIFD and use TIFFWriteDirectory() to close the main IFD, I assume that any following calls to TIFFSetField() will write to the GPS IFD, but there are no LibTIFF names for the fields inside the GPS IFD. That makes me think I'm missing something.
I imagine it's clear that I don't really know what I'm doing with LibTIFF, so any help is appreciated. Feel free to ignore the fact that I'm in LuaJIT and just tell me what this might look like in C.
For reference, this question seems to accomplish the same task in Java, but I don't understand it well enough to translate.
Finally, here's a general idea of what I have now, but this is obviously incomplete. The "v_int"-style functions are just helpers that return the appropriate C variable from ffi.new().
local fdt = tiff.TIFFOpen(ofname, 'w')
-- Set all the fields in the main IFD here
-- ...
if self.scfg.gps then
-- Prepare libtiff for a subIFD
-- https://stackoverflow.com/questions/11959617/in-a-tiff-create-a-sub-ifd-with-thumbnail-libtiff#11998400
subifd_n = v_int(1)
subifd_offsets = ffi.new(string.format('toff_t[%i]', subifd_n), {v_uint32(0)})
tiff.TIFFSetField(fdt, C.TIFFTAG_SUBIFD, v_int(subifd_n), subifd_offsets)
end
local bytes_per_pixel = self.sample_depth / 8
tiff.TIFFWriteRawStrip(fdt, 0, data_raw, self.frame_height * self.frame_width * bytes_per_pixel)
if self.scfg.gps then
tiff.TIFFWriteDirectory(fdt) -- Finish main IFD
b2 = v_uint8(2)
b0 = v_uint8(0)
tiff.TIFFSetField(fdt, 0, b2, b2, b0, b0) -- GPSVersionID
end
tiff.TIFFClose(fdt) -- Closes currently-open IFD (?)
Any help is very much appreciated!
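For what it's worth, here is a rough C sketch of one way this could look. It skips TIFFTAG_SUBIFD entirely and instead assumes a libtiff new enough (4.1.0 or later) to ship TIFFCreateGPSDirectory and the GPSTAG_* definitions, writing the GPS IFD as a custom directory whose offset is then patched into TIFFTAG_GPSIFD; treat the individual field calls as assumptions to verify against your headers:
#include <stdint.h>
#include <tiffio.h>
int write_tiff_with_gps(const char *path, const void *raw, tmsize_t raw_len,
                        uint32_t width, uint32_t height)
{
TIFF *tif = TIFFOpen(path, "w");
if (!tif)
return -1;
/* Main IFD fields, plus a placeholder GPSIFD offset so the tag gets allocated. */
TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
TIFFSetField(tif, TIFFTAG_IMAGELENGTH, height);
TIFFSetField(tif, TIFFTAG_GPSIFD, (uint64_t)0);
TIFFWriteRawStrip(tif, 0, (void *)raw, raw_len);
TIFFCheckpointDirectory(tif); /* flush the main IFD while keeping it current */
/* Start a fresh GPS directory and fill it with GPSTAG_* fields. */
TIFFCreateGPSDirectory(tif);
const uint8_t gps_version[4] = { 2, 3, 0, 0 };
TIFFSetField(tif, GPSTAG_VERSIONID, gps_version);
TIFFSetField(tif, GPSTAG_LATITUDEREF, "N");
const double lat[3] = { 52.0, 12.0, 30.0 }; /* deg, min, sec */
TIFFSetField(tif, GPSTAG_LATITUDE, lat);
/* Write the GPS IFD as a custom directory, then return to the main IFD and
patch the real offset into TIFFTAG_GPSIFD. */
uint64_t gps_offset = 0;
TIFFWriteCustomDirectory(tif, &gps_offset);
TIFFSetDirectory(tif, 0);
TIFFSetField(tif, TIFFTAG_GPSIFD, gps_offset);
TIFFClose(tif);
return 0;
}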

How to know which address space a buffer head is mapped to?

In the jbd2 source code, any modification in the file system is mapped into a handle_t structure (per process), which is later used to map the buffer_head to the transaction_t that this handle is part of.
As far as I can tell, when a modification to a given buffer_head is needed, a call to do_get_write_access() maps this buffer_head to the transaction that the handle_t belongs to.
However, once the handle_t has been used to map the buffer_head to the transaction_t, the reciprocal mapping is lost; that is, I cannot track back which handle_t this buffer_head belonged to.
The thing is that, during jbd2_journal_commit_transaction() (commit phase 2b in the commit function), I want to find a way to walk through these buffer_heads and classify them according to whether they relate to an inode, to metadata, to an inode bitmap block, or to a data bitmap block, for example. Furthermore, at this point in the source code the buffer_heads seem to be opaque; they are simply sent to storage.
UPDATE 1:
What I have tried so far was this, in the jbd2_journal_commit_transaction() function, in the commit phase 2b.
struct journal_head *jh;
...
jh = commit_transaction->t_buffers;
if (jh->b_jlist == BJ_Metadata) {
struct buffer_head *bh_p = jh2bh(jh);
if (!bh_p) {
printk(KERN_DEBUG "Null ptr in bh_p\n");
} else {
struct address_space *as_p = bh_p->b_assoc_map;
if (as_p == NULL) {
printk(KERN_DEBUG "Null ptr in as_p\n");
} else {
/* b_assoc_map->host is the inode that owns the associated mapping */
struct inode *i_p = as_p->host;
if (i_p)
printk(KERN_DEBUG "Inode is %lu\n", i_p->i_ino);
}
}
}
It is not working; it hits the "Null ptr in as_p" case, that is, there is no b_assoc_map set for this buffer_head. But I have no idea what b_assoc_map is.
UPDATE 2:
I am trying to get the information from the handle_t structure in ext4_mark_iloc_dirty. handle_t->h_type has the information I need. However, when I try to compare this value, a NULL pointer causes a kernel warning. I thought this structure was unique per process, but it seems to be hitting some race condition; I am not sure yet.
After looking through all the source code paths related to this issue, I conclude that there is no way to do it without modifying the source.
Basically, the handle_t structure has the information about the transaction. Later, when some modification is going to be made to a given buffer_head, jbd2_journal_get_write_access(handle, bh) is called to get write access to the specified buffer.
Inside jbd2_journal_get_write_access the journal_head structure is created and set to point to this buffer_head; however, at this point there is no relation to the handle_t yet.
In the next step, after returning from jbd2_journal_add_journal_head, a call to do_get_write_access(handle, bh) is made, and here the journal_head is initialized with the information passed by the handle_t.
After this step, where the handle_t is used to initialize the journal_head, the handle_t is not needed anymore.
Up to here, everything is initialized; now we can move to the commit point.
In jbd2_journal_commit_transaction, at commit phase 2b, the buffer_heads belonging to the committing transaction are iterated over and committed.
Because the only information attached to the buffer_head is the journal_head, and the journal_head does not contain the information needed to distinguish what kind of buffer_head it is, I conclude that it is not possible to achieve what I want without modifying the source code.
My solution was to add a new member to store the inode number in the handle_t, and also in journal_head structure. So, when the do_get_write_access() call is made, I can filter the operation like this:
if(handle->h_ino)
jh->b_ino = handle->h_ino;
So I had to modify handle_t to transport the inode number to journal_head, and at commit time I can get the information I need.
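To make this concrete, here is a rough sketch of that change, using the h_ino/b_ino names from above. Everything outside those two added fields follows the stock jbd2 code paths (include/linux/jbd2.h, fs/jbd2/transaction.c, fs/jbd2/commit.c), so treat the exact hook points as assumptions to check against your kernel version:
/* include/linux/jbd2.h: carry the inode number on the handle and the journal_head */
struct jbd2_journal_handle {
/* ... existing members ... */
unsigned long h_ino; /* inode this handle works on behalf of, 0 if unknown */
};
struct journal_head {
/* ... existing members ... */
unsigned long b_ino; /* copied from the handle_t when write access is taken */
};
/* The filesystem (e.g. ext4) must set handle->h_ino when it starts the handle. */
/* fs/jbd2/transaction.c, inside do_get_write_access(): propagate handle -> journal_head */
if (handle->h_ino)
jh->b_ino = handle->h_ino;
/* fs/jbd2/commit.c, commit phase 2b: the buffer can now be classified */
jh = commit_transaction->t_buffers;
if (jh->b_ino)
printk(KERN_DEBUG "jbd2: buffer belongs to inode %lu\n", jh->b_ino);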

Rewriting cpufreq_frequency_table initialization for legacy cpufreq_driver

long time listener, first time caller.
I've been backporting features from upstream code as recent as 4.12-rc-whatever to a 3.4-base kernel for an older Qualcomm SoC board (apq8064, ridiculous undertaking I know).
Thus far I've been successful with almost every core API, with any compatibility issues solved by creative shims and duct tape, with the exception of cpufreq.
Keep in mind that I'm still using legacy platform drivers and clocking, no DTs or common clock framework.
My issue begins with the inclusion of struct cpufreq_frequency_table into struct cpufreq_policy, as part of the move from per-cpu to per-policy in the API. In 3.13, registering a platform's freq_table becomes more difficult for unique cases, as using cpufreq_frequency_table_get_attr is no longer an option.
In my case, the cpufreq_driver's init is generic and relies on my platform's scaling driver (acpuclock-krait) to register the freq_table, which is fine for the older API but becomes incompatible with the per-policy setup. The upstream API requires the driver to initialize policy->freq_table itself, while mine uses both a cpu parameter and an array of 35 entries representing the tables in the platform code. It also accounts for the 6 different speedbin/pvs values when choosing a table. I'm considering dropping the "cpu" param from it and using cpumask_copy, and perhaps even combining the two drivers into one and making the clock driver a probe, but thus far init is a mystery for me. Here is the snippet of my table registration; if anyone can think of something hackable, I'd be eternally grateful...
#ifdef CONFIG_CPU_FREQ_MSM
static struct cpufreq_frequency_table freq_table[NR_CPUS][35];
extern int console_batt_stat;
static void __init cpufreq_table_init(void)
{
int cpu;
int freq_cnt = 0;
for_each_possible_cpu(cpu) {
int i;
/* Construct the freq_table tables from acpu_freq_tbl. */
for (i = 0, freq_cnt = 0; drv.acpu_freq_tbl[i].speed.khz != 0
&& freq_cnt < ARRAY_SIZE(*freq_table)-1; i++) {
if (drv.acpu_freq_tbl[i].use_for_scaling) {
freq_table[cpu][freq_cnt].index = freq_cnt;
freq_table[cpu][freq_cnt].frequency
= drv.acpu_freq_tbl[i].speed.khz;
freq_cnt++;
}
}
/* freq_table not big enough to store all usable freqs. */
BUG_ON(drv.acpu_freq_tbl[i].speed.khz != 0);
freq_table[cpu][freq_cnt].index = freq_cnt;
freq_table[cpu][freq_cnt].frequency = CPUFREQ_TABLE_END;
/* Register table with CPUFreq. */
cpufreq_frequency_table_get_attr(freq_table[cpu], cpu);
}
dev_info(drv.dev, "CPU Frequencies Supported: %d\n", freq_cnt);
}
UPDATE!!! I wanted to update the initial registration BEFORE merging all the core changes back in, and am pretty certain that I've done so. Previously, the array in question referenced a per-cpu dummy array that looked like this: freq_table[NR_CPUS][35], which required the cpu parameter to be listed as part of the table. I've made some changes here that allow me a per-cpu setup AND the platform-specific freq management (which cpufreq doesn't need to see), but with a dummy table representing the "index," which cpufreq does need to see. Commit is here, the next one fixes obvious mistakes: https://github.com/robcore/machinex/commit/59d7e5307104c2396a2e4c2a5e0b07f950dea10f
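For reference, here's a minimal sketch of what I think the per-policy registration could look like once ->init owns the table under the 3.13+ API; msm_cpufreq_init and the shared freq_table[] naming are illustrative, not code from the commit above:
#include <linux/cpufreq.h>
#include <linux/cpumask.h>
static struct cpufreq_frequency_table freq_table[NR_CPUS][35]; /* built as in cpufreq_table_init() */
static int msm_cpufreq_init(struct cpufreq_policy *policy)
{
/* All Krait cores share one clock domain here, so a single policy can
cover every possible CPU instead of keeping a strictly per-cpu view. */
cpumask_copy(policy->cpus, cpu_possible_mask);
/* Hand the table straight to the policy; this replaces the old
cpufreq_frequency_table_get_attr() registration. */
return cpufreq_table_validate_and_show(policy, freq_table[policy->cpu]);
}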

Decoder return of av_find_best_stream vs. avcodec_find_decoder

The docs for libav's av_find_best_stream function (libav 11.7, Windows, i686, GPL) specify a parameter that can be used to receive a pointer to an appropriate AVCodec:
decoder_ret - if non-NULL, returns the decoder for the selected stream
There is also the avcodec_find_decoder function which can find an AVCodec given an ID.
However, the official demuxing + decoding example uses av_find_best_stream to find a stream, but chooses to use avcodec_find_decoder to find the codec in lieu of av_find_best_stream's codec return parameter:
ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
...
stream_index = ret;
st = fmt_ctx->streams[stream_index];
...
/* find decoder for the stream */
dec = avcodec_find_decoder(st->codecpar->codec_id);
As opposed to something like:
ret = av_find_best_stream(fmt_ctx, type, -1, -1, &dec, 0);
My question is pretty straightforward: Is there a difference between using av_find_best_stream's return parameter vs. using avcodec_find_decoder to find the AVCodec?
The reason I ask is that the example chose to use avcodec_find_decoder rather than the seemingly more convenient return parameter, and I can't tell if the example did that for a specific reason or not. The documentation itself is a little spotty and disjointed, so it's hard to tell whether things like this are done for a specific, important reason. I can't tell if the example is implying that it "should" be done that way, or if the example author did it for a more arbitrary personal reason.
av_find_best_stream uses avcodec_find_decoder internally in pretty much the same way as in your code sample. However, there is a change in av_find_best_stream's behaviour when a decoder is requested from it: it will try avcodec_find_decoder on each candidate stream, and if that fails it will discard the candidate and move on to the next one. In the end it returns the best stream together with its decoder. If a decoder is not requested, it just returns the best stream without checking whether it can be decoded.
So if you just want to get a single video/audio stream and you are not going to write custom stream-selection logic, I'd say there's no downside to using av_find_best_stream to get the decoder.
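For reference, the one-call variant would look roughly like this (assuming fmt_ctx is already opened and probed, as in the demuxing example):
AVCodec *dec = NULL;
ret = av_find_best_stream(fmt_ctx, type, -1, -1, &dec, 0);
if (ret < 0) {
/* no suitable stream, or (since &dec was passed) none that can also be decoded */
return ret;
}
stream_index = ret;
st = fmt_ctx->streams[stream_index];
/* dec already points at the decoder; no separate avcodec_find_decoder() call is needed */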

C (Windows) - GPU usage (load %)

According to many sources on the Internet, it's possible to get GPU usage (load) using D3DKMTQueryStatistics.
How to query GPU Usage in DirectX?
I've succeeded in getting memory information using code from here with slight modifications:
http://processhacker.sourceforge.net/forums/viewtopic.php?t=325#p1338
However, I didn't find a member of the D3DKMT_QUERYSTATISTICS structure that carries information about GPU usage.
Look at the EtpUpdateNodeInformation function in gpumon.c. It queries process statistics per GPU node; there can be several processing nodes per graphics card:
queryStatistics.Type = D3DKMT_QUERYSTATISTICS_PROCESS_NODE
...
totalRunningTime += queryStatistics.QueryResult.ProcessNodeInformation.RunningTime.QuadPart
...
PhUpdateDelta(&Block->GpuRunningTimeDelta, totalRunningTime);
...
block->GpuNodeUsage = (FLOAT)(block->GpuRunningTimeDelta.Delta / (elapsedTime * EtGpuNodeBitMapBitsSet));
It accumulates the process's running time across nodes and divides it by the actual elapsed time span.
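Roughly, the per-process accounting looks like the sketch below. The names follow gpumon.c; D3DKMTQueryStatistics is part of the undocumented d3dkmthk.h API, so verify the exact struct layout against your headers:
D3DKMT_QUERYSTATISTICS queryStatistics;
ULONG64 totalRunningTime = 0;
ULONG node;
for (node = 0; node < nodeCount; node++) {
memset(&queryStatistics, 0, sizeof(queryStatistics));
queryStatistics.Type = D3DKMT_QUERYSTATISTICS_PROCESS_NODE;
queryStatistics.AdapterLuid = adapterLuid; /* adapter opened earlier */
queryStatistics.hProcess = processHandle; /* process being measured */
queryStatistics.QueryProcessNode.NodeId = node;
if (NT_SUCCESS(D3DKMTQueryStatistics(&queryStatistics)))
totalRunningTime += queryStatistics.QueryResult.ProcessNodeInformation.RunningTime.QuadPart;
}
/* RunningTime is a monotonically increasing counter in 100 ns units, so the load is the
counter's growth over the sampling interval divided by (interval length * node count). */
gpuNodeUsage = (FLOAT)(totalRunningTime - previousRunningTime) /
(FLOAT)(elapsedTime100ns * nodeCount);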
