C library for compressing sequential positive integers
I have the very common problem of creating an index for an on-disk array of strings. In short, I need to store the position of each string in the on-disk representation. For example, a very naive solution would be an index array as follows:
uint64 idx[] = { 0, 20, 500, 1024, ..., 103434 };
Which says that the first string is at position 0, the second at position 20, the third at position 500 and the nth at position 103434.
The positions are always non-negative 64-bit integers in sequential order. Although consecutive positions could differ by any amount, in practice I expect the typical difference to be in the range from 2^8 to 2^20. I expect this index to be mmap'ed in memory, and the positions to be accessed randomly (assume a uniform distribution).
I was thinking about writing my own code for doing some sort of block delta encoding or other more sophisticated encoding, but there are so many different trade-offs between encoding/decoding speed and space that I would rather get a working library as a starting point and maybe even settle for something without any customizations.
Any hints? A C library would be ideal, but a C++ one would also let me run some initial benchmarks.
A few more details if you are still following. This will be used to build a library similar to cdb (http://cr.yp.to/cdb/cdbmake.html) on top of the cmph library (http://cmph.sf.net). In short, it is for a large disk-based read-only associative map with a small index in memory.
Since it is a library, I don't have control over the input, but the typical use case that I want to optimize has hundreds of millions of values, an average value size in the few-kilobyte range, and a maximum value size of 2^31.
For the record, if I don't find a library ready to use, I intend to implement delta encoding in blocks of 64 integers, with the initial bytes of each block specifying the block's base offset. The blocks themselves would be indexed with a tree, giving me O(log(n/64)) access time. There are way too many other options, and I would prefer not to discuss them. I am really looking for ready-to-use code rather than ideas on how to implement the encoding. I will be glad to share with everyone what I did once I have it working.
I appreciate your help and let me know if you have any doubts.
I use FastBit (Kesheng Wu, LBL.gov). It seems you need something good, fast, and available now, and FastBit is a highly competent improvement on Oracle's BBC (byte-aligned bitmap code, BerkeleyDB). It's easy to set up and generally very good.
However, given more time, you may want to look at a Gray-code solution; it seems optimal for your purposes.
Daniel Lemire has a number of libraries for C/C++/Java released on Google Code. I've read over some of his papers and they are quite nice: several advancements on FastBit, and alternative approaches for column reordering with permuted Gray codes.
Almost forgot: I also came across Tokyo Cabinet. I do not think it will be well suited to my current project, though I might have considered it more if I had known about it before ;). It has a large degree of interoperability:
Tokyo Cabinet is written in the C language, and provided as API of C, Perl, Ruby, Java, and Lua. Tokyo Cabinet is available on platforms which have API conforming to C99 and POSIX.
Since you referred to CDB: the TC benchmark has a TC mode (TC supports several operational constraints for varying performance) in which it surpassed CDB by 10 times for read performance and 2 times for write.
With respect to your delta encoding requirement, I am quite confident in bsdiff and its ability to out-perform any file.exe content patching system; it may also have some fundamental interfaces for your general needs.
Google's new binary compression application, Courgette, may be worth checking out, in case you missed the press release: 10x smaller diffs than bsdiff in the one test case I have seen published.
You have two conflicting requirements:
You want to compress very small items (8 bytes each).
You need efficient random access for each item.
The second requirement is very likely to impose a fixed length for each item.
What exactly are you trying to compress? If you are thinking about the total size of the index, is it really worth the effort to save that space?
If so, one thing you could try is to chop the 64-bit space in half and store the data in two tables: the first would store (upper uint, start index, length, pointer to second table), and the second would store (index, lower uint).
For fast searching, the indices could be implemented using something like a B+ tree.
I did something similar years ago for a full-text search engine. In my case, each indexed word generated a record which consisted of a record number (document id) and a word number (it could just as easily have stored word offsets) which needed to be compressed as much as possible. I used a delta-compression technique which took advantage of the fact that there would be a number of occurrences of the same word within a document, so the record number often did not need to be repeated at all. And the word offset delta would often fit within one or two bytes. Here is the code I used.
Since it's in C++, the code is probably not useful to you as is, but it can be a good starting point for writing compression routines.
Please excuse the hungarian notation and the magic numbers strewn within the code. Like I said, I wrote this many years ago :-)
IndexCompressor.h
//
// index compressor class
//
#pragma once
#include "File.h"
const int IC_BUFFER_SIZE = 8192;
//
// index compressor
//
class IndexCompressor
{
private :
File *m_pFile;
WA_DWORD m_dwRecNo;
WA_DWORD m_dwWordNo;
WA_DWORD m_dwRecordCount;
WA_DWORD m_dwHitCount;
WA_BYTE m_byBuffer[IC_BUFFER_SIZE];
WA_DWORD m_dwBytes;
bool m_bDebugDump;
void FlushBuffer(void);
public :
IndexCompressor(void) { m_pFile = 0; m_bDebugDump = false; }
~IndexCompressor(void) {}
void Attach(File& File) { m_pFile = &File; }
void Begin(void);
void Add(WA_DWORD dwRecNo, WA_DWORD dwWordNo);
void End(void);
WA_DWORD GetRecordCount(void) { return m_dwRecordCount; }
WA_DWORD GetHitCount(void) { return m_dwHitCount; }
void DebugDump(void) { m_bDebugDump = true; }
};
IndexCompressor.cpp
//
// index compressor class
//
#include "stdafx.h"
#include "IndexCompressor.h"
void IndexCompressor::FlushBuffer(void)
{
ASSERT(m_pFile != 0);
if (m_dwBytes > 0)
{
m_pFile->Write(m_byBuffer, m_dwBytes);
m_dwBytes = 0;
}
}
void IndexCompressor::Begin(void)
{
ASSERT(m_pFile != 0);
m_dwRecNo = m_dwWordNo = m_dwRecordCount = m_dwHitCount = 0;
m_dwBytes = 0;
}
void IndexCompressor::Add(WA_DWORD dwRecNo, WA_DWORD dwWordNo)
{
ASSERT(m_pFile != 0);
WA_BYTE buffer[16];
int nbytes = 1;
ASSERT(dwRecNo >= m_dwRecNo);
if (dwRecNo != m_dwRecNo)
m_dwWordNo = 0;
if (m_dwRecordCount == 0 || dwRecNo != m_dwRecNo)
++m_dwRecordCount;
++m_dwHitCount;
WA_DWORD dwRecNoDelta = dwRecNo - m_dwRecNo;
WA_DWORD dwWordNoDelta = dwWordNo - m_dwWordNo;
if (m_bDebugDump)
{
TRACE("%8X[%8X] %8X[%8X] : ", dwRecNo, dwRecNoDelta, dwWordNo, dwWordNoDelta);
}
// 1WWWWWWW
if (dwRecNoDelta == 0 && dwWordNoDelta < 128)
{
buffer[0] = 0x80 | WA_BYTE(dwWordNoDelta);
}
// 01WWWWWW WWWWWWWW
else if (dwRecNoDelta == 0 && dwWordNoDelta < 16384)
{
buffer[0] = 0x40 | WA_BYTE(dwWordNoDelta >> 8);
buffer[1] = WA_BYTE(dwWordNoDelta & 0x00ff);
nbytes += sizeof(WA_BYTE);
}
// 001RRRRR WWWWWWWW WWWWWWWW
else if (dwRecNoDelta < 32 && dwWordNoDelta < 65536)
{
buffer[0] = 0x20 | WA_BYTE(dwRecNoDelta);
WA_WORD *p = (WA_WORD *) (buffer+1);
*p = WA_WORD(dwWordNoDelta);
nbytes += sizeof(WA_WORD);
}
else
{
// 0001rrww
buffer[0] = 0x10;
// encode recno
if (dwRecNoDelta < 256)
{
buffer[nbytes] = WA_BYTE(dwRecNoDelta);
nbytes += sizeof(WA_BYTE);
}
else if (dwRecNoDelta < 65536)
{
buffer[0] |= 0x04;
WA_WORD *p = (WA_WORD *) (buffer+nbytes);
*p = WA_WORD(dwRecNoDelta);
nbytes += sizeof(WA_WORD);
}
else
{
buffer[0] |= 0x08;
WA_DWORD *p = (WA_DWORD *) (buffer+nbytes);
*p = dwRecNoDelta;
nbytes += sizeof(WA_DWORD);
}
// encode wordno
if (dwWordNoDelta < 256)
{
buffer[nbytes] = WA_BYTE(dwWordNoDelta);
nbytes += sizeof(WA_BYTE);
}
else if (dwWordNoDelta < 65536)
{
buffer[0] |= 0x01;
WA_WORD *p = (WA_WORD *) (buffer+nbytes);
*p = WA_WORD(dwWordNoDelta);
nbytes += sizeof(WA_WORD);
}
else
{
buffer[0] |= 0x02;
WA_DWORD *p = (WA_DWORD *) (buffer+nbytes);
*p = dwWordNoDelta;
nbytes += sizeof(WA_DWORD);
}
}
// update current setting
m_dwRecNo = dwRecNo;
m_dwWordNo = dwWordNo;
// add compressed data to buffer
ASSERT(buffer[0] != 0);
ASSERT(nbytes > 0 && nbytes < 10);
if (m_dwBytes + nbytes > IC_BUFFER_SIZE)
FlushBuffer();
CopyMemory(m_byBuffer + m_dwBytes, buffer, nbytes);
m_dwBytes += nbytes;
if (m_bDebugDump)
{
for (int i = 0; i < nbytes; ++i)
TRACE("%02X ", buffer[i]);
TRACE("\n");
}
}
void IndexCompressor::End(void)
{
FlushBuffer();
m_pFile->Write(WA_BYTE(0));
}
You've omitted critical information about the number of strings you intend to index.
But given that you say you expect the minimum length of an indexed string to be 256, storing the indices as 64-bit values incurs at most about 3% overhead. If the total length of the string file is less than 4 GB, you could use 32-bit indices and incur about 1.5% overhead. These numbers suggest to me that if compression matters, you're better off compressing the strings, not the indices. For that problem a variation on LZ77 seems in order.
If you want to try a wild idea, put each string in a separate file, pull them all into a zip file, and see how well you can do with zziplib. This probably won't be great, but it's nearly zero work on your part.
More data on the problem would be welcome:
Number of strings
Average length of a string
Maximum length of a string
Median length of strings
Degree to which the strings file compresses with gzip
Whether you are allowed to change the order of strings to improve compression
EDIT
The comment and revised question make the problem much clearer. I like your idea of grouping, and I would try a simple delta encoding: group the deltas and use a variable-length code within each group. I wouldn't wire in 64 as the group size; I think you will probably want to determine that empirically.
You asked for existing libraries. For the grouping and delta encoding I doubt you will find much. For variable-length integer codes, I'm not seeing much in the way of C libraries, but you can find variable-length codings in Perl and Python. There are a ton of papers and some patents on this topic, and I suspect you're going to wind up having to roll your own. But there are some simple codes out there, and you could give UTF-8 a try: it can encode unsigned integers up to 32 bits, and you can grab C code from Plan 9 and, I'm sure, many other sources.
Are you running on Windows? If so, I recommend creating the mmap'ed file using the naive solution you originally proposed, and then compressing the file using NTFS compression. Your application code never knows the file is compressed, and the OS does the file compression for you. You might not think this would be very performant or get good compression, but I think you'll be surprised if you try it.
Related
Decoding TIFF LZW codes not yet in the dictionary
I made a decoder of LZW-compressed TIFF images, and all the parts work, it can decode large images at various bit depths with or without horizontal prediction, except in one case. While it decodes files written by most programs (like Photoshop and Krita with various encoding options) fine, there's something very strange about the files created by ImageMagick's convert, it produces LZW codes that aren't yet in the dictionary, and I don't know how to handle it. Most of the time the 9 to 12-bit code in the LZW stream that isn't yet in the dictionary is the next one that my decoding algorithm will try to put in the dictionary (which I'm not sure should be a problem although my algorithm fails on an image that contains such cases), but at times it can even be hundreds of codes into the future. In one case the first code after the clear code (256) is 364, which seems quite impossible given that the clear code clears my dictionary of all codes 258 and above, in another case the code is 501 when my dictionary only goes up to 317! I have no idea how to deal with it, but it seems that I'm the only one with this problem, the decoders in other programs load such images fine. So how do they do it? Here's the core of my decoding algorithm, obviously due to how much code is involved I can't provide complete compilable code in a compact manner, but since this is a matter of algorithmic logic this should be enough. It follows closely the algorithm described in the official TIFF specification (page 61), in fact most of the spec's pseudo code is in the comments. 
void tiff_lzw_decode(uint8_t *coded, buffer_t *dec)
{
    buffer_t word={0}, outstring={0};
    size_t coded_pos;                       // position in bits
    int i, new_index, code, maxcode, bpc;
    buffer_t *dict={0};
    size_t dict_as=0;

    bpc = 9;                                // starts with 9 bits per code, increases later
    tiff_lzw_calc_maxcode(bpc, &maxcode);
    new_index = 258;                        // index at which new dict entries begin
    coded_pos = 0;                          // bit position
    lzw_dict_init(&dict, &dict_as);

    while ((code = get_bits_in_stream(coded, coded_pos, bpc)) != 257)  // while ((Code = GetNextCode()) != EoiCode)
    {
        coded_pos += bpc;

        if (code >= new_index)
            printf("Out of range code %d (new_index %d)\n", code, new_index);

        if (code == 256)                    // if (Code == ClearCode)
        {
            lzw_dict_init(&dict, &dict_as); // InitializeTable();
            bpc = 9;
            tiff_lzw_calc_maxcode(bpc, &maxcode);
            new_index = 258;

            code = get_bits_in_stream(coded, coded_pos, bpc);  // Code = GetNextCode();
            coded_pos += bpc;

            if (code == 257)                // if (Code == EoiCode)
                break;

            append_buf(dec, &dict[code]);   // WriteString(StringFromCode(Code));
            clear_buf(&word);
            append_buf(&word, &dict[code]); // OldCode = Code;
        }
        else if (code < 4096)
        {
            if (dict[code].len)             // if (IsInTable(Code))
            {
                append_buf(dec, &dict[code]);  // WriteString(StringFromCode(Code));
                lzw_add_to_dict(&dict, &dict_as, new_index, 0, word.buf, word.len, &bpc);
                lzw_add_to_dict(&dict, &dict_as, new_index, 1, dict[code].buf, 1, &bpc);  // AddStringToTable
                new_index++;
                tiff_lzw_calc_bpc(new_index, &bpc, &maxcode);
                clear_buf(&word);
                append_buf(&word, &dict[code]);  // OldCode = Code;
            }
            else
            {
                clear_buf(&outstring);
                append_buf(&outstring, &word);
                bufwrite(&outstring, word.buf, 1);  // OutString = StringFromCode(OldCode) + FirstChar(StringFromCode(OldCode));
                append_buf(dec, &outstring);        // WriteString(OutString);
                lzw_add_to_dict(&dict, &dict_as, new_index, 0, outstring.buf, outstring.len, &bpc);  // AddStringToTable
                new_index++;
                tiff_lzw_calc_bpc(new_index, &bpc, &maxcode);
                clear_buf(&word);
                append_buf(&word, &dict[code]);     // OldCode = Code;
            }
        }
    }

    free_buf(&word);
    free_buf(&outstring);
    for (i=0; i < dict_as; i++)
        free_buf(&dict[i]);
    free(dict);
}
As for the results that my code produces in such situations it's quite clear from how it looks that it's only those few codes that are badly decoded, everything before and after is properly decoded, but obviously in most cases the subsequent image after one of these mystery future codes is ruined by virtue of shifting the rest of the decoded bytes by a few places. That means that my reading of the 9 to 12-bit code stream is correct, so this really means that I see a 364 code right after a 256 dictionary-clearing code. Edit: Here's an example file that contains such weird codes. I've also found a small TIFF LZW loading library that suffers from the same problem, it crashes where my loader finds the first weird code in this image (code 3073 when the dictionary only goes up to 2051). The good thing is that since it's a small library you can test it with the following code:
#include "loadtiff.h"
#include "loadtiff.c"

void loadtiff_test(char *path)
{
    int width, height, format;
    floadtiff(fopen(path, "rb"), &width, &height, &format);
}
And if anyone insists on diving into my code (which should be unnecessary, and it's a big library) here's where to start.
The bogus codes come from trying to decode more than we're supposed to. The problem is that an LZW strip may sometimes not end with an End-of-Information (257) code, so the decoding loop has to stop once a certain number of decoded bytes have been output. That number of bytes per strip is determined by the TIFF tags: ROWSPERSTRIP * IMAGEWIDTH * BITSPERSAMPLE / 8, and, if PLANARCONFIG is 1 (which means interleaved channels as opposed to planar), multiplied by SAMPLESPERPIXEL. So on top of stopping the decoding loop when a code 257 is encountered, the loop must also be stopped after that count of decoded bytes has been reached.
Reading serial port faster
I have computer software that sends RGB color codes to an Arduino over USB. It works fine when they are sent slowly, but when tens of them are sent every second it freaks out. What I think happens is that the Arduino serial buffer fills up so quickly that the processor can't handle it the way I'm reading it.
#define INPUT_SIZE 11

void loop()
{
    if (Serial.available())
    {
        char input[INPUT_SIZE + 1];
        byte size = Serial.readBytes(input, INPUT_SIZE);
        input[size] = 0;

        int channelNumber = 0;
        char* channel = strtok(input, " ");
        while (channel != 0)
        {
            color[channelNumber] = atoi(channel);
            channel = strtok(0, " ");
            channelNumber++;
        }
        setColor(color);
    }
}
For example the computer might send 255 0 123, where the numbers are separated by spaces. This works fine when the sending interval is slow enough, or when the buffer always holds exactly one color code, for example 255 255 255, which is 11 bytes (INPUT_SIZE). However, if a color code is shorter than 11 bytes and a second code is sent immediately, the code still reads 11 bytes from the serial buffer and starts combining the two codes, messing up the colors. How do I avoid this while keeping it as efficient as possible?
It is not a matter of reading the serial port faster; it is a matter of not reading a fixed block of 11 characters when the input data has variable length. You are telling it to read until 11 characters are received or the timeout occurs, but if the first group is fewer than 11 characters and a second group follows immediately, there will be no timeout, and you will partially read the second group. You seem to understand that, so I am not sure how you conclude that "reading faster" will help. Using your existing data encoding of ASCII decimal, space-delimited triplets, one solution would be to read the input one character at a time until the entire triplet is read; however, you could more simply use the Arduino readBytesUntil() function:
#define INPUT_SIZE 3

void loop()
{
    if (Serial.available())
    {
        char rgb_str[3][INPUT_SIZE + 1] = {{0}, {0}, {0}};
        Serial.readBytesUntil(' ', rgb_str[0], INPUT_SIZE);
        Serial.readBytesUntil(' ', rgb_str[1], INPUT_SIZE);
        Serial.readBytesUntil(' ', rgb_str[2], INPUT_SIZE);

        for (int channelNumber = 0; channelNumber < 3; channelNumber++)
        {
            color[channelNumber] = atoi(rgb_str[channelNumber]);
        }
        setColor(color);
    }
}
Note that this solution does not require the somewhat heavyweight strtok() processing, since the Stream class has done the delimiting work for you. However, there is a simpler and even more efficient solution. You are currently sending ASCII decimal strings and then requiring the Arduino to spend CPU cycles needlessly extracting the fields and converting them to integer values, when you could simply send the byte values directly, leaving the vastly more powerful PC to do any processing necessary to pack the data thus.
Then the code might simply be:
void loop()
{
    if (Serial.available())
    {
        for (int channelNumber = 0; channelNumber < 3; channelNumber++)
        {
            color[channelNumber] = Serial.read();
        }
        setColor(color);
    }
}
Note that I have not tested any of the above code, and the Arduino documentation is lacking in some cases, for example with respect to descriptions of return values. You may need to tweak the code somewhat. Neither of the above solves the synchronisation problem, i.e. when the colour values are streaming, how do you know which byte is the start of an RGB triplet? You have to rely on getting the first field value and maintaining count and sync thereafter, which is fine until perhaps the Arduino is started after the data stream starts, or is reset, or the PC process is terminated and restarted asynchronously. However, that was a problem with your original implementation too, so perhaps it is a problem to be dealt with elsewhere.
First of all, I agree with @Thomas Padron-McCarthy. Sending a character string instead of a byte array (11 bytes instead of 3, plus the parsing) would simply be a waste of resources. On the other hand, the approach you should follow depends on your sender:
Is it periodic or not?
Is the message fixed-size or not?
If it is periodic, you can check for messages at the period of the sender. If not, you need to check for messages before the buffer is full.
In any case I would add a checksum to the message. Say you have a fixed-size message structure:
struct MyMessage
{
    // unsigned char id; // id of a message, maybe?
    unsigned char colors[3]; // or unsigned char r, g, b;
    unsigned char checksum; // more than one byte would make a stronger checksum
};

unsigned char calcCheckSum(struct MyMessage msg)
{
    //...
}

unsigned int validateCheckSum(const struct MyMessage *msg)
{
    //...
    if (valid) return 1;
    else return 0;
}
Now you should check every 4 bytes (the size of MyMessage) in a sliding-window fashion to see whether they form a valid message:
void findMessages()
{
    struct MyMessage *msg;
    byte size = Serial.readBytes(input, INPUT_SIZE);
    byte msgSize = sizeof(struct MyMessage);

    for (int i = 0; i + msgSize <= size; i++)
    {
        msg = (struct MyMessage *) &input[i];
        if (validateCheckSum(msg)) // found a message
        {
            processMessage(msg);
        }
        else
        {
            // discard this byte; it's part of a corrupted message
            // (you may be too late to process that one anyway)
        }
    }
}
If the messages are not fixed-size, it gets complicated, but I'm guessing you don't need to hear about that case.
One last thing: I would use a circular buffer. First add the received bytes into the buffer, then check the bytes in that buffer.
EDIT: I gave some thought to the comments. I see the point of printable encoded messages; I guess my perspective comes from working at a military company. We don't have printable encoded "fire" arguments there :) There are a lot of messages coming and going all the time, and decoding/encoding printable messages would be a waste of time. We also use hardware that usually exchanges very small messages with bitfields. I accept that a printable message can be easier to examine and understand.
Hope it helps, Gokhan.
If faster is really what you want... this is a little far-fetched. The fastest way I can think of to meet your needs and provide synchronization is to send one byte per color and change the parity bit in a defined way, assuming you can read the parity and the byte value of a character with the wrong parity. You will have to deal with the changing parity, and most of the characters will not be human-readable, but it has got to be one of the fastest ways to send three bytes of data.
Identifying a trend in C - Micro controller sampling
I'm working on an MC68HC11 microcontroller and have sampled an analogue voltage signal going in. The scenario is a weighing machine: the large peaks are when the object hits the sensor, then the signal stabilises (those are the samples I want), and then it peaks again before the object rolls off. The problem I'm having is figuring out a way for the program to detect this stable region and average it to produce an overall weight, but I can't figure out how :/. One way I have thought of is comparing previous values to see whether there is a large difference between them, but I haven't had any success. Below is the C code that I am using:
#include <stdio.h>
#include <stdarg.h>
#include <iof1.h>

void main(void)
{
    /* PORTA, DDRA, DDRG etc... are LEDs and switch ports */
    unsigned char *paddr, *adctl, *adr1;
    unsigned short i = 0;
    unsigned short k = 0;
    unsigned char switched = 1;   /* is char the smallest data type? */
    unsigned char data[2000];

    DDRA = 0x00;   /* All in */
    DDRG = 0xff;

    adctl = (unsigned char*) 0x30;
    adr1  = (unsigned char*) 0x31;
    *adctl = 0x20;   /* single continuous scan */

    while (1)
    {
        if (*adr1 > 40)
        {
            if (PORTA == 128)   /* Debugging switch */
            {
                PORTG = 1;
            }
            else
            {
                PORTG = 0;
            }

            if (i < 2000)
            {
                while (((*adctl) & 0x80) == 0x00);
                {
                    data[i] = *adr1;
                }
                /* if (i > 10 && (data[(i-10)] - data[i]) < 20) */
                i++;
            }

            if (PORTA == switched)
            {
                PORTG = 31;

                /* Print a delimiter so teemtalk can send to excel */
                for (k = 0; k < 2000; k++)
                {
                    printf("%d,", data[k]);
                }

                if (switched == 1)   /* bitwise manipulation more efficient? */
                {
                    switched = 0;
                }
                else
                {
                    switched = 1;
                }
                PORTG = 0;
            }

            if (i >= 2000)
            {
                i = 0;
            }
        }
    }
}
I look forward to hearing any suggestions :) (The graph below shows how these values look; the red box is the area I would like to identify.)
As your sample sequence has glitches (short-lived transients), first try to improve the hardware: change the layout, add decoupling, add filtering, etc.
If that approach fails, then try a median filter [1] of, say, five places, which takes the last five samples, sorts them, and outputs the middle one, so that two transient samples have no effect on its output (seven places tolerates three transient samples).
Then apply a computationally efficient exponential-averaging lowpass filter [2]:
y(n) = y(n-1) + alpha * (x(n) - y(n-1))
choosing alpha = 1/2^n (so the division becomes a right shift) to yield a time constant [3] shorter than the underlying response (~50 samples) but long enough to still filter out the noise. Increasing the effective number of fractional bits will avoid quantizing issues.
With this improved sample sequence, thresholds and cycle counts can be applied to detect quiescent durations. Additionally, if the end of the quiescent period is always followed by a large, abrupt change, then a sample-delay "array" lets you detect the abrupt change while still having the last of the quiescent samples available for logging.
[1] http://en.wikipedia.org/wiki/Median_filter
[2] http://www.dsprelated.com/showarticle/72.php
[3] http://en.wikipedia.org/wiki/Time_constant
Note: adding code for the above filtering operations will lower the maximum possible sample rate, but printf can be substituted with something faster.
Continuously store the current value and the delta from the previous value. Note when the delta stops decreasing: that is the start of weight application to the scale. Note when the delta starts increasing: that is the end of weight application to the scale. Take the X values with small deltas and average them. BTW, I'm sure this has been done a million times before; a search for "scale PID" or "weight PID" should find a lot of information.
Don't forget to use the ___delay_ms(XX) function somewhere between readings if you will compare each value with the previous one. The difference at each step will obviously be small if the code loops continuously.
Looking at your nice graphs, I would say you should look only for the falling edge; it is much more consistent than the leading edge. In other words: let the samples accumulate, calculate a running average the whole time with a predefined window size, remember the deviation of the previous values just for reference, and check for a large negative bump in your values (say, an absolute value ten times smaller than the current running average). When you see it, your running average is your value. You could go back a little (disregarding the last few values in your average and recalculating) to compensate for the small positive bump visible in your picture before each negative bump. No need for heavy math here; you cannot model the reality better than your picture shows. Just make sure that your code detects the end of each and every sample: you have to sample fast enough that no negative bump is missed (or you will have a big error in your averaging). And you don't need such large arrays; the running average works better with a smaller window size, which also leaves a smaller residual error when you detect the negative bump.
Hash table implementation
I just bought the book "C Interfaces and Implementations". In chapter 3 it implements an "Atom" structure; sample code as follows:
#define NELEMS(x) ((sizeof (x))/(sizeof ((x)[0])))

static struct atom {
    struct atom *link;
    int len;
    char *str;
} *buckets[2048];

static unsigned long scatter[] = {
    2078917053, 143302914, 1027100827, 1953210302, 755253631, 2002600785, 1405390230, 45248011, 1099951567, 433832350, 2018585307, 438263339, 813528929, 1703199216, 618906479, 573714703, 766270699, 275680090, 1510320440, 1583583926, 1723401032, 1965443329, 1098183682, 1636505764, 980071615, 1011597961, 643279273, 1315461275, 157584038, 1069844923, 471560540, 89017443, 1213147837, 1498661368, 2042227746, 1968401469, 1353778505, 1300134328, 2013649480, 306246424, 1733966678, 1884751139, 744509763, 400011959, 1440466707, 1363416242, 973726663, 59253759, 1639096332, 336563455, 1642837685, 1215013716, 154523136, 593537720, 704035832, 1134594751, 1605135681, 1347315106, 302572379, 1762719719, 269676381, 774132919, 1851737163, 1482824219, 125310639, 1746481261, 1303742040, 1479089144, 899131941, 1169907872, 1785335569, 485614972, 907175364, 382361684, 885626931, 200158423, 1745777927, 1859353594, 259412182, 1237390611, 48433401, 1902249868, 304920680, 202956538, 348303940, 1008956512, 1337551289, 1953439621, 208787970, 1640123668, 1568675693, 478464352, 266772940, 1272929208, 1961288571, 392083579, 871926821, 1117546963, 1871172724, 1771058762, 139971187, 1509024645, 109190086, 1047146551, 1891386329, 994817018, 1247304975, 1489680608, 706686964, 1506717157, 579587572, 755120366, 1261483377, 884508252, 958076904, 1609787317, 1893464764, 148144545, 1415743291, 2102252735, 1788268214, 836935336, 433233439, 2055041154, 2109864544, 247038362, 299641085, 834307717, 1364585325, 23330161, 457882831, 1504556512, 1532354806, 567072918, 404219416, 1276257488, 1561889936, 1651524391, 618454448, 121093252, 1010757900, 1198042020, 876213618, 124757630, 2082550272, 1834290522, 1734544947,
    1828531389, 1982435068, 1002804590, 1783300476, 1623219634, 1839739926, 69050267, 1530777140, 1802120822, 316088629, 1830418225, 488944891, 1680673954, 1853748387, 946827723, 1037746818, 1238619545, 1513900641, 1441966234, 367393385, 928306929, 946006977, 985847834, 1049400181, 1956764878, 36406206, 1925613800, 2081522508, 2118956479, 1612420674, 1668583807, 1800004220, 1447372094, 523904750, 1435821048, 923108080, 216161028, 1504871315, 306401572, 2018281851, 1820959944, 2136819798, 359743094, 1354150250, 1843084537, 1306570817, 244413420, 934220434, 672987810, 1686379655, 1301613820, 1601294739, 484902984, 139978006, 503211273, 294184214, 176384212, 281341425, 228223074, 147857043, 1893762099, 1896806882, 1947861263, 1193650546, 273227984, 1236198663, 2116758626, 489389012, 593586330, 275676551, 360187215, 267062626, 265012701, 719930310, 1621212876, 2108097238, 2026501127, 1865626297, 894834024, 552005290, 1404522304, 48964196, 5816381, 1889425288, 188942202, 509027654, 36125855, 365326415, 790369079, 264348929, 513183458, 536647531, 13672163, 313561074, 1730298077, 286900147, 1549759737, 1699573055, 776289160, 2143346068, 1975249606, 1136476375, 262925046, 92778659, 1856406685, 1884137923, 53392249, 1735424165, 1602280572
};

const char *Atom_new(const char *str, int len)
{
    unsigned long h;
    int i;
    struct atom *p;

    assert(str);
    assert(len >= 0);

    for (h = 0, i = 0; i < len; i++)
        h = (h<<1) + scatter[(unsigned char)str[i]];
    h &= NELEMS(buckets)-1;

    for (p = buckets[h]; p; p = p->link)
        if (len == p->len) {
            for (i = 0; i < len && p->str[i] == str[i]; )
                i++;
            if (i == len)
                return p->str;
        }

    p = ALLOC(sizeof (*p) + len + 1);
    p->len = len;
    p->str = (char *)(p + 1);
    if (len > 0)
        memcpy(p->str, str, len);
    p->str[len] = '\0';
    p->link = buckets[h];
    buckets[h] = p;   /* insert atom at front of list */
    return p->str;
}
At the end of the chapter, in exercise 3.1, the book's author says: "Most texts recommend using a prime number for the size of buckets.
Using a prime and a good hash function usually gives a better distribution of the lengths of the lists hanging off of buckets. Atom uses a power of two, which is sometimes explicitly cited as a bad choice. Write a program to generate or read, say, 10,000 typical strings and measure Atom_new's speed and the distribution of the lengths of the lists. Then change buckets so that it has 2,039 entries (the largest prime less than 2,048), and repeat the measurements. Does using a prime help? How much does your conclusion depend on your specific machine?" So I changed the hash table size to 2039, but it seems a prime number actually made a worse distribution of the list lengths. I also tried 64 and 61, and 61 made a worse distribution too. I just want to know why a prime table size makes a worse distribution. Is it because the hash function used with Atom_new is a bad hash function? I am using this function to print out the lengths of the atom lists:

#define B_SIZE 2048
void Atom_print(void)
{
    int i, t;
    struct atom *atom;
    for (i = 0; i < B_SIZE; i++) {
        t = 0;
        for (atom = buckets[i]; atom; atom = atom->link)
            ++t;
        printf("%d ", t);
    }
}
Well, a long time ago I had to implement a hash table (in driver development), and I wondered about the same thing. Why the heck should I use a prime number? OTOH a power of 2 seemed even better: instead of calculating the modulus, with a power of 2 you can use a bitwise AND. So I implemented such a hash table. The key was a pointer (returned by some 3rd-party function). Then, eventually, I noticed that in my hash table only 1/4 of all the entries were ever filled. That was because the hash function I used was the identity function, and it turned out that all the returned pointers were multiples of 4. The idea of using prime numbers for the hash table size is the following: real-world hash functions do not produce uniformly distributed values. Usually there is (or at least there may be) some dependency in the inputs. So, in order to diffuse that pattern, it's recommended to use a prime table size. BTW, theoretically the hash function could happen to produce only multiples of your chosen prime, but the probability of that is lower than for a non-prime size.
I think it's the code that selects the bucket. In the code you pasted it says: h &= NELEMS(buckets)-1; That works fine for sizes that are powers of two, since its effect is to keep the low bits of h. For any other size, NELEMS(buckets)-1 will have some bits set to 0, and the bitwise & operator will discard those bits, effectively leaving "holes" in the bucket list. The general formula for bucket selection is: h = h % NELEMS(buckets);
This is what Julienne Walker from Eternally Confuzzled has to say about hash table sizes: When it comes to hash tables, the most recommended table size is any prime number. This recommendation is made because hashing in general is misunderstood, and poor hash functions require an extra mixing step of division by a prime to resemble a uniform distribution. Another reason that a prime table size is recommended is because several of the collision resolution methods require it to work. In reality, this is a generalization and is actually false (a power of two with odd step sizes will typically work just as well for most collision resolution strategies), but not many people consider the alternatives and in the world of hash tables, prime rules.
There's another factor at work here: the constant hashing values should all be odd (or prime) and widely dispersed. If you have an even number of units (characters, for instance) in the key to be hashed, then having all odd constants will give you an even initial hash value; for an odd number of units you'd get an odd one. I've done some experimenting with this, and just that 50/50 split was worth a lot in evening out the distribution. Of course, if all keys are equally long this doesn't matter. The hashing also needs to ensure that you won't get the same initial hash value for "AAB" as for "ABA" or "BAA".
Techniques for handling short reads/writes with scatter-gather?
Scatter-gather - readv()/writev()/preadv()/pwritev() - reads/writes a variable number of iovec structs in a single system call. Basically it reads/writes each buffer sequentially, from the 0th iovec to the Nth. However, according to the documentation, the readv/writev calls can also return less than was requested. I was wondering if there is a standard/best-practice/elegant way to handle that situation. If we are just handling a bunch of character buffers or similar, this isn't a big deal. But one of the niceties is using scatter-gather for structs and/or discrete variables as the individual iovec items. How do you handle the situation where readv/writev only reads/writes a portion of a struct, or half of a long, or something like that? Below is some contrived code of what I am getting at:

int fd;
struct iovec iov[3];
long aLong = 74775767;
int aInt = 949;
char aBuff[100];    /* filled from wherever */
ssize_t bytesWritten = 0;
ssize_t bytesToWrite = 0;

iov[0].iov_base = &aLong;
iov[0].iov_len = sizeof(aLong);
bytesToWrite += iov[0].iov_len;

iov[1].iov_base = &aInt;
iov[1].iov_len = sizeof(aInt);
bytesToWrite += iov[1].iov_len;

iov[2].iov_base = aBuff;
iov[2].iov_len = sizeof(aBuff);
bytesToWrite += iov[2].iov_len;

bytesWritten = writev(fd, iov, 3);
if (bytesWritten == -1) {
    /* handle error */
}
if (bytesWritten < bytesToWrite) {
    /* how to gracefully continue?......... */
}
Use a loop like the following to advance past the partially-processed iovecs:

for (;;) {
    written = writev(fd, iov + cur, count - cur);
    if (written < 0)
        goto error;
    while (cur < count && written >= iov[cur].iov_len)
        written -= iov[cur++].iov_len;
    if (cur == count)
        break;
    iov[cur].iov_base = (char *)iov[cur].iov_base + written;
    iov[cur].iov_len -= written;
}

Note the cur < count test in the inner loop: without it you could step past the end of iov, since trailing elements might have an iov_len of zero.
AFAICS the vectored read/write functions work the same wrt short reads/writes as the normal ones. That is, you get back the number of bytes read/written, and this might well point into the middle of a struct, just like with read()/write(). There is no guarantee that the possible "interruption points" (for lack of a better term) coincide with the vector boundaries. So unfortunately the vectored I/O functions offer no more help for dealing with short reads/writes than the normal I/O functions. In fact, it's more complicated, since you need to map the byte count to an I/O vector element and an offset within that element. Also note that the idea of using vectored I/O for individual structs or data items might not work that well; the maximum allowed value for the iovcnt argument (IOV_MAX) is usually quite small, something like 1024 or so. So if your data is contiguous in memory, just pass it as a single element rather than artificially splitting it up.
Vectored write will write all the data you have provided in one call to the "writev" function, so bytesWritten will always be equal to the total number of bytes provided as input. This is my understanding; please correct me if I am wrong.