While solving today's AOC challenge I stumbled upon an interesting phenomenon: either there is a problem with the GCC compiler, or I have done something incorrect that I have yet to realize.
In the code below, I am parsing a list of numbers at a fixed offset on a line read from the file input.txt:
Starting items: 83, 62, 93
Something appears to go wrong when storing the result in the num variable. I have yet to check the disassembly to see what is going on.
#include "stdio.h"
#include "stdlib.h"
#include "../../includes/fmt.c"
void problem();
int main() {
FILE * fp = fopen("input.txt", "r");
char * line = NULL;
size_t len = 0;
size_t length = (getline(&line, &len, fp) - 16) >> 2;
problem(line, length);
}
void problem(char * line, size_t length) {
for (size_t i = 0; i < length; ++i) {
int num = 10 * (line[18 + (i << 2)]-'0')
+ (line[19 + (1 << 2)]-'0');
println("{2i: = }", num, 10 * (line[18 + (i << 2)]-'0')
+ (line[19 + (i << 2)]-'0'));
}
}
Output:
82 = 83
62 = 62
92 = 93
I know that the error is not in my custom print function as I have tried without it.
So my question is: is this a fault of my own, or perhaps a bug in GCC?
Linux 5.15.82-1-lts x86_64
GCC 12.2.0
It looks like there may be an error in the code on this line:
int num = 10 * (line[18 + (i << 2)]-'0') + (line[19 + (1 << 2)]-'0');
The index of the second character being read appears to be incorrect. Instead of line[19 + (1 << 2)], it should be line[19 + (i << 2)] to access the correct character in the string.
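For reference, a minimal sketch of the corrected loop (using plain printf rather than the question's custom println, so it stands alone):

void problem(char *line, size_t length) {
    for (size_t i = 0; i < length; ++i) {
        /* both digit positions now use the loop index i */
        int num = 10 * (line[18 + (i << 2)] - '0')
                     + (line[19 + (i << 2)] - '0');
        printf("%d\n", num);
    }
}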
I am trying to solve an exercise, which aims to find the last digit of a large Fibonacci number, and I searched for others' solutions. I found one here: https://www.geeksforgeeks.org/program-find-last-digit-nth-fibonnaci-number/, copied method 2, and only changed ll f[60] = {0}; to ll f[60];, but this doesn't work properly in CLion. My test code:
int n;
std::cin >> n;
for (int i = 0; i < n; i++) {
    std::cout << findLastDigit(i) << '\n';
}
return 0;
}
The error: SIGSEGV (Segmentation fault). Could someone give me a hint or reference or something?
Correct me if I'm totally off base here, but if I'm looking at this correctly, you don't need to actually calculate anything:
The last digit of the Fibonacci numbers follows a predictable pattern that repeats every 60 terms, like so (starting with F(0)):
011235831459437077415617853819099875279651673033695493257291
011235831459437077415617853819099875279651673033695493257291
011235831459437077415617853819099875279651673033695493257291
011235831459437077415617853819099875279651673033695493257291
011235831459437077415617853819099875279651673033695493257291 ….etc
So all you have to do is compute the list from F(0) to F(59) once, take whatever insanely large input modulo 60, and simply look up this reference array.
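A minimal C sketch of that idea (fib_last_digit is a made-up name; the table is built once, keeping only the last digit at each step):

#include <stdio.h>

/* Last digit of F(n), using the 60-entry repeating cycle. */
int fib_last_digit(unsigned long long n) {
    int table[60];
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i < 60; i++)
        table[i] = (table[i - 1] + table[i - 2]) % 10; /* last digit only */
    return table[n % 60];
}

int main(void) {
    printf("%d\n", fib_last_digit(7));          /* F(7) = 13 -> prints 3 */
    printf("%d\n", fib_last_digit(1000000007)); /* huge index, same cost */
    return 0;
}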
———————————————
UPDATE 1: upon further research, it seems there's more of a pattern to it (see the verification sketch after this list):
last 1 : every 60
last 2 : every 300 ( 5x)
last 3 : every 1,500 ( 5x)
last 4 % 5,000 : every 7,500 ( 5x)
last 4 : every 15,000 (10x)
last 5 % 50,000 : every 75,000 ( 5x)
last 5 : every 150,000 (10x)
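If you want to check those periods empirically, a small brute-force sketch like this should do (the expected values in the comments are the claims above, to be verified, not built-in facts):

#include <stdio.h>

/* Length of the cycle of Fibonacci numbers mod m (the Pisano period):
   the sequence repeats once the pair (F(i) mod m, F(i+1) mod m)
   returns to the starting pair (0, 1). */
long period_mod(long m) {
    long a = 0, b = 1;
    for (long i = 1; ; i++) {
        long t = (a + b) % m;
        a = b;
        b = t;
        if (a == 0 && b == 1)
            return i;
    }
}

int main(void) {
    printf("mod 10:    %ld\n", period_mod(10));    /* expect 60 */
    printf("mod 100:   %ld\n", period_mod(100));   /* expect 300 */
    printf("mod 1000:  %ld\n", period_mod(1000));  /* expect 1500 */
    printf("mod 10000: %ld\n", period_mod(10000)); /* expect 15000 */
    return 0;
}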
For a large number, you probably want to utilize a cache. Could you do something like this?
// Recursive solution
int fib(int n, int cache[]) {
    if (n == 0) {
        return 0;
    }
    if (n == 1) {
        return 1;
    }
    if (cache[n] != 0) {
        return cache[n];
    }
    cache[n] = fib(n - 1, cache) + fib(n - 2, cache);
    return cache[n];
}

// Iterative solution
int fib(int n) {
    int cache[n + 1];
    cache[0] = 0;
    cache[1] = 1;
    for (int i = 2; i <= n; i++) {
        cache[i] = cache[i - 1] + cache[i - 2];
    }
    return cache[n];
}
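Since the question only needs the last digit, one caveat: plain int entries overflow somewhere past F(46). A sketch of the iterative version with each entry reduced mod 10 (so it can never overflow) and the cache on the heap (fibLastDigit is a made-up name):

#include <stdlib.h>

int fibLastDigit(int n) {
    if (n < 2) return n;
    int *cache = malloc((n + 1) * sizeof *cache);
    if (cache == NULL) return -1; /* allocation-failure sentinel */
    cache[0] = 0;
    cache[1] = 1;
    for (int i = 2; i <= n; i++)
        cache[i] = (cache[i - 1] + cache[i - 2]) % 10; /* keep last digit only */
    int digit = cache[n];
    free(cache);
    return digit;
}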
Segfaults are caused when trying to read or write an illegal memory location.
Running the original code already produces an access violation on my machine.
I modified the original code in two places: I replaced #include <bits/stdc++.h> with #include <iostream> and added one line of debug output:
// Optimized Program to find last
// digit of nth Fibonacci number
#include <iostream>
using namespace std;

typedef long long int ll;

// Finds nth fibonacci number
ll fib(ll f[], ll n)
{
    // 0th and 1st number of
    // the series are 0 and 1
    f[0] = 0;
    f[1] = 1;

    // Add the previous 2 numbers
    // in the series and store
    // last digit of result
    for (ll i = 2; i <= n; i++)
        f[i] = (f[i - 1] + f[i - 2]) % 10;

    cout << "n (valid range 0, ... ,59): " << n << endl;
    return f[n];
}

// Returns last digit of n'th Fibonacci Number
int findLastDigit(int n)
{
    ll f[60] = {0};

    // Precomputing units digit of
    // first 60 Fibonacci numbers
    fib(f, 60);

    return f[n % 60];
}

// Driver code
int main()
{
    ll n = 1;
    cout << findLastDigit(n) << endl;
    n = 61;
    cout << findLastDigit(n) << endl;
    n = 7;
    cout << findLastDigit(n) << endl;
    n = 67;
    cout << findLastDigit(n) << endl;
    return 0;
}
Compiling and running it on my machine:
$ g++ fib_original.cpp
$ ./a.out
n (valid range 0, ... ,59): 60
zsh: abort ./a.out
ll f[60] has indices ranging from 0 to 59 and index 60 is out of range.
Compiling and running the same code on https://www.onlinegdb.com/
n (valid range 0, ... ,59): 60
1
n (valid range 0, ... ,59): 60
1
n (valid range 0, ... ,59): 60
3
n (valid range 0, ... ,59): 60
3
Although it is an out-of-range access, that environment happens to handle it just fine.
Finding out why the code runs with the array initialization but crashes without it on your machine would require some debugging there.
My suspicion is that initializing the array changes the memory layout in a way that happens to make one additional entry usable.
Please note that access outside of the array bounds is undefined behavior as explained in Accessing an array out of bounds gives no error, why?.
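As an aside, one way to make this kind of out-of-bounds access fail loudly on any machine, instead of depending on the memory layout, is to compile with AddressSanitizer:

$ g++ -g -fsanitize=address fib_original.cpp
$ ./a.out

The instrumented binary should abort at the offending write with a stack-buffer-overflow report instead of silently continuing.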
The task is:
Input from file input.txt and output to file output.txt
The first line is the target number.
The second line is sequence of positive integers in range 1 to 999999999.
If any two of these integers sum to the target, the program has to output 1, otherwise 0.
Example:
5
1 7 3 4 7 9
Output: 1
Here is my program. It passes 5 tests and fails the 6th with a wrong result. I need help finding the bug, or a rewrite.
#include <stdio.h>

int main() {
    FILE *fs = fopen("input.txt", "r");
    int target;
    fscanf(fs, "%d", &target);
    unsigned char bitset[1 + target / 8];
    int isFound = 0;
    for (int number; !isFound && fscanf(fs, "%d", &number) == 1;) {
        if (number <= target) {
            const int compliment = target - number;
            isFound = (bitset[compliment / 8] & (1 << (compliment % 8))) > 0;
            bitset[number / 8] |= 1 << (number % 8);
        }
    }
    fclose(fs);
    fs = fopen("output.txt", "w");
    fprintf(fs, "%d", isFound);
    fclose(fs);
    return 0;
}
In your code, you forgot to clear the local array, so you get undefined behavior and incorrect results. Note that variable-length arrays cannot be initialized with an initializer, so you should use memset to clear this array.
The problem with this approach is that target could be very large, up to 1999999998, which makes it impractical to define a bit array of the necessary size with automatic storage. You should allocate this array with calloc() so it is initialized to 0.
Here is a modified version:
#include <stdint.h> /* for SIZE_MAX */
#include <stdio.h>
#include <stdlib.h> /* for calloc() and free() */

int main() {
    FILE *fs = fopen("input.txt", "r");
    unsigned long target, number; /* type long has at least 32 bits */
    int isFound = 0;

    if (!fs || fscanf(fs, "%lu", &target) != 1)
        return 1;
    if (target <= 1999999998 && target / 8 < SIZE_MAX) {
        unsigned char *bitset = calloc(1, 1 + target / 8);
        if (bitset != NULL) {
            while (!isFound && fscanf(fs, "%lu", &number) == 1) {
                if (number <= target) {
                    unsigned long complement = target - number;
                    isFound = (bitset[complement / 8] >> (complement % 8)) & 1;
                    bitset[number / 8] |= 1 << (number % 8);
                }
            }
            free(bitset);
        }
    }
    fclose(fs);
    fs = fopen("output.txt", "w");
    fprintf(fs, "%d", isFound);
    fclose(fs);
    return 0;
}
For very large target values, a different approach can be used: a hash table, as sketched after these steps:
read the next value number
if target-number is in the hash table, found=1, stop
if not, store number in the hash table
continue
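A sketch of that approach in C, using open addressing with linear probing (the table size and hash constant are illustrative choices, and the table is assumed large enough for the input; none of this is prescribed by the steps above):

#include <stdio.h>
#include <stdlib.h>

#define TABLE_SIZE (1UL << 21) /* power of two, assumed larger than the input count */

/* Open-addressing hash set of the numbers seen so far. 0 marks an empty
   slot, which is safe here because all inputs are >= 1. */
static unsigned long table[TABLE_SIZE];

static size_t slot_for(unsigned long key) {
    size_t i = (key * 2654435761UL) & (TABLE_SIZE - 1); /* multiplicative hash */
    while (table[i] != 0 && table[i] != key)
        i = (i + 1) & (TABLE_SIZE - 1);                 /* linear probing */
    return i;
}

int main(void) {
    FILE *fs = fopen("input.txt", "r");
    unsigned long target, number;
    int isFound = 0;
    if (!fs || fscanf(fs, "%lu", &target) != 1)
        return 1;
    while (!isFound && fscanf(fs, "%lu", &number) == 1) {
        if (number <= target) {
            unsigned long complement = target - number;
            if (complement != 0)
                isFound = (table[slot_for(complement)] == complement);
            table[slot_for(number)] = number; /* remember this value */
        }
    }
    fclose(fs);
    fs = fopen("output.txt", "w");
    fprintf(fs, "%d", isFound);
    fclose(fs);
    return 0;
}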
At least this problem:
Code attempts to read uninitialized data
bitset[] not initialized.
isFound = (bitset[compliment / 8] ....
Suggest initializing:
unsigned char bitset[1 + target / 8];
memset(bitset, 0, sizeof bitset);
I have been searching through the internet for several days to find a solution to the following problem.
In my program I am reading chunks of data from two 16-bit .wav files into sound buffers (arrays of type short), for which I allocate memory on the heap. The data is cast to double for the fftw functions, processed, then scaled down and cast back to short to be placed into a collection buffer before writing the output file to disk. This way I reduce the number of times I have to access the hard disk, since I am reading several chunks at a time (i.e. moving through the file) and don't want to write to disk on every iteration.
Here is what I am doing:
short* sound_buffer_zero;
short* sound_buffer_one;
short* collection_buffer_one;

sound_buffer_zero = (short *) fftw_malloc(sizeof(short) * BUFFERSIZE);
sound_buffer_one = (short *) fftw_malloc(sizeof(short) * BUFFERSIZE);
collection_buffer_one = (short *) fftw_malloc(sizeof(short) * COLLECTIONLENGTH);

// read BUFFERSIZE samples from file into sound_buffer
inFileZero.read((char*)sound_buffer_zero, sizeof(short)*BUFFERSIZE);
inFileOne.read((char*)sound_buffer_one, sizeof(short)*BUFFERSIZE);

// typecast the short int values of sound_buffer into double values
// and write them to in_
for(int p = 0; p < BUFFERSIZE; ++p) {
    *(in_zero + p) = (double)*(sound_buffer_zero + p);
    *(in_one + p) = (double)*(sound_buffer_one + p);
}

// cross correlation in the frequency domain
// FFT on input zero (output is f_zero)
fftw_execute(p_zero);
// FFT on input one (output is f_one)
fftw_execute(p_one);
// complex multiplication (output is almost_one, also array of type double)
fastCplxConjProd(almost_one, f_zero, f_one, COMPLEXLENGTH);
// IFFT on almost_one (output is out_one, array of double)
fftw_execute(pi_one);

// finalize the output array (sort the parts correctly, output is final_one, array of double)
// last half without first value becomes first half of final array
for(int i = ARRAYLENGTH/2 + 1; i < ARRAYLENGTH; ++i) {
    *(final_one + i - (ARRAYLENGTH/2 + 1)) = *(out_one + i);
}
// first half becomes second half of final array
for(int i = 0; i < ARRAYLENGTH/2; ++i) {
    *(final_one + i + (ARRAYLENGTH/2 - 1)) = *(out_one + i);
}

short* scaling_vector;
scaling_vector = (short *) fftw_malloc(sizeof(short) * ARRAYLENGTH-1);

// fill the scaling_vector with the numbers from 1, 2, 3, ..., BUFFERSIZE, ..., 3, 2, 1
for(short i = 0; i < BUFFERSIZE; ++i) {
    *(scaling_vector + i) = i + 1;
    if(i + BUFFERSIZE > ARRAYLENGTH-1) break;
    *(scaling_vector + i + BUFFERSIZE) = BUFFERSIZE - i - 1;
}

// scale values in relation to their position in the output array
// to values suitable for short int for storage
for(int i = 0; i < ARRAYLENGTH-1; ++i) {
    *(final_one + i) = *(final_one + i) * SCALEFACTOR; // #define SCALEFACTOR SHRT_MAX/pow(2,42)
    *(final_one + i) = *(final_one + i) / *(scaling_vector + i);
}

// transform the double values of final_ into rounded short int values
// and write them to the collection buffer
for(int p = 0; p < ARRAYLENGTH-1; ++p) {
    *(collection_buffer_one + collectioncount*(ARRAYLENGTH) + p) = (short)round(*(final_one + p));
}

// write collection_buffer to disk
outFileOne.write((char*)collection_buffer_one, sizeof(short)*collectioncount*(ARRAYLENGTH));
The values that are computed in the cross-correlation are of type double and have positive or negative signs. By scaling them down, the sign does not change. But when I cast them to short the numbers that arrive in the collection_array are all positive.
The array is declared as short, not as unsigned short, and after scaling the values are in a range that short can hold (you have to trust me on this one, because I don't want to post all my code to keep the post readable). I don't care about the truncation of the decimal part, I don't need that for further computation, but the signs should stay the same.
Here is a little example for the input and output values (shown are the first 10 values in the arrays):
input: 157
input: 4058
input: -1526
input: 1444
input: -774
input: -1507
input: -1615
input: -1895
input: -987
input: -1729
// converted to double
as double: 157
as double: 4058
as double: -1526
as double: 1444
as double: -774
as double: -1507
as double: -1615
as double: -1895
as double: -987
as double: -1729
// after the computations
after scaling: -2.99445
after scaling: -42.6612
after scaling: -57.0962
after scaling: 41.0415
after scaling: -18.3168
after scaling: 43.5853
after scaling: -14.3663
after scaling: -3.58456
after scaling: -46.3902
after scaling: 16.0804
// in the collection array and before writing to disk
collection [short(round*(final_one))]: 3
collection [short(round*(final_one))]: 43
collection [short(round*(final_one))]: 57
collection [short(round*(final_one))]: 41
collection [short(round*(final_one))]: 18
collection [short(round*(final_one))]: 44
collection [short(round*(final_one))]: 14
collection [short(round*(final_one))]: 4
collection [short(round*(final_one))]: 46
collection [short(round*(final_one))]: 16
My question is, why are the signs not retained? Am I missing some internal conversion? I did not find an answer to my question in the other posts. If I missed it, please let me know and also If I left out important info for you. Thanks for your help!
Cheers,
mango
Here's the code for the test outputs:
// contents of sound_buffer (input from file):
// test output
for(int i = 0; i < 10; ++i) {
    cout << "input: " << *(sound_buffer_zero + i) << endl;
}

// content of in_ after converting to double
// test output
for(int i = 0; i < 10; ++i) {
    cout << "as double: " << *(in_zero + i) << endl;
}

// contents of final_ after the scaling
// test output
for(int i = 0; i < 10; ++i) {
    cout << "after scaling: " << *(final_one + i) << endl;
}

// contents of collection_buffer after converting to short
// test output
for(int i = 0; i < 10; ++i) {
    cout << "collection [short(round*(final_one))]: " << *(collection_buffer_one + i) << endl;
}
Thanks to aleguna I found that the signs vanish in computations that follow the ones shown: I had totally missed a step where I do final_one = fabs(final_one). I had put that in for a test and totally forgotten about it.
Thank you all for your comments and answers. It turns out, I was just stupid. I am sorry.
What platform are you running this on?
I did a little test on linux x86, gcc 3.4.2
#include <iostream>
#include <math.h>

int main (int, char*[])
{
    double a = -2.99445;
    short b = (short)round(a);
    std::cout << "a = " << a << " b = " << b << std::endl;
    return 0;
}
output
a = -2.99445 b = -3
So I can think of two scenarios
You haven't shown us some code between scaling and converting to short
You run some exotic platform with non-standard double representation
How does the following run on your platform, when compiled with no optimization at all?
#include <stdlib.h>
#include <stdio.h>
double a[10] = {
-2.99445,
-42.6612,
-57.0962,
41.0415,
-18.3168,
43.5853,
-14.3663,
-3.58456,
-46.3902,
16.0804
};
int main(){
    int i;
    for (i = 0; i < 10; ++i){
        short k = (short)*(a + i);
        printf("%d\n", k);
    }
    return 0;
}
gives me the following results:
-2
-42
-57
41
-18
43
-14
-3
-46
16
Normally, short is 2 bytes long while double is 8 bytes long. Casting double to short causes the loss of the upper bytes. Even if 2 bytes is enough for your actual data without the sign, you'll lose the sign info, which is stored in the upper bytes by sign extension.
Because I'm masochistic I'm trying to write something in C to decode an 8-bit PNG file (it's a learning thing, I'm not trying to reinvent libpng...)
I've got to the point where the stuff in my deflated, unfiltered data buffer unmistakably resembles the source image (see below), but it's still quite, erm, wrong, and I'm pretty sure there's something askew in my implementation of the filtering algorithms. Most of them are quite simple, but there's one major thing I don't understand in the docs, not being good at maths or ever having taken a comp-sci course:
Unsigned arithmetic modulo 256 is used, so that both the inputs and outputs fit into bytes.
What does that mean?
If someone can tell me that I'd be very grateful!
For reference (and I apologise for the crappy C), my noddy implementation of the filtering algorithms described in the docs looks like:
unsigned char paeth_predictor (unsigned char a, unsigned char b, unsigned char c) {
    // a = left, b = above, c = upper left
    char p = a + b - c;    // initial estimate
    char pa = abs(p - a);  // distances to a, b, c
    char pb = abs(p - b);
    char pc = abs(p - c);
    // return nearest of a,b,c,
    // breaking ties in order a,b,c.
    if (pa <= pb && pa <= pc) return a;
    else if (pb <= pc) return b;
    else return c;
}

void unfilter_sub(char* out, char* in, int bpp, int row, int rowlen) {
    for (int i = 0; i < rowlen; i++)
        out[i] = in[i] + (i < bpp ? 0 : out[i-bpp]);
}

void unfilter_up(char* out, char* in, int bpp, int row, int rowlen) {
    for (int i = 0; i < rowlen; i++)
        out[i] = in[i] + (row == 0 ? 0 : out[i-rowlen]);
}

void unfilter_paeth(char* out, char* in, int bpp, int row, int rowlen) {
    char a, b, c;
    for (int i = 0; i < rowlen; i++) {
        a = i < bpp ? 0 : out[i - bpp];
        b = row < 1 ? 0 : out[i - rowlen];
        c = i < bpp ? 0 : (row == 0 ? 0 : out[i - rowlen - bpp]);
        out[i] = in[i] + paeth_predictor(a, b, c);
    }
}
And the images I'm seeing:
Source
Source http://img220.imageshack.us/img220/8111/testdn.png
Output
Output http://img862.imageshack.us/img862/2963/helloworld.png
It means that, in the algorithm, whenever an arithmetic operation is performed, it is performed modulo 256, i.e. if the result does not fit in a byte it "wraps" around. The result is that all values always fit into 8 bits and never overflow.
Unsigned types already behave this way by mandate, and if you use unsigned char (and a byte on your system is 8 bits, which it probably is), then your calculation results will naturally just never overflow beyond 8 bits.
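A tiny C demonstration of that wrap-around behavior:

#include <stdio.h>

int main(void) {
    unsigned char a = 200, b = 100;
    unsigned char sum = a + b;         /* 300 wraps to 300 - 256 = 44 */
    printf("%d\n", sum);               /* prints 44 */
    printf("%d\n", (200 + 100) % 256); /* same result: 44 */
    return 0;
}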
It means only the last 8 bits of the result are used. 2^8 = 256, so the last 8 bits of an unsigned value v are the same as (v % 256).
For example, 2 + 255 = 257, which is 100000001 in binary; the last 8 bits of 257 are 00000001, and 257 % 256 is also 1.
In 'simple language' it means that you never go "out" of your byte size.
For example in C# if you try this it will fail:
byte test = 255 + 255;
(1,13): error CS0031: Constant value '510' cannot be converted to a
'byte'
byte test = (byte)(255 + 255);
(1,13): error CS0221: Constant value '510' cannot be converted to a
'byte' (use 'unchecked' syntax to override)
For every calculation you have to do modulo 256 (C#: % 256).
Instead of writing % 256 you can also do AND 255:
(175 + 205) mod 256 = (175 + 205) AND 255
Some C# samples:
byte test = ((255 + 255) % 256);
// test: 254
byte test = ((255 + 255) & 255);
// test: 254
byte test = ((1 + 379) % 256);
// test: 124
byte test = ((1 + 379) & 0xFF);
// test: 124
Note that you can sometimes simplify a series of byte additions:
(byteVal1 + byteVal2 + byteVal3) % 256
= (((byteVal1 % 256) + (byteVal2 % 256)) % 256 + (byteVal3 % 256)) % 256
Possible Duplicate:
Is there a printf converter to print in binary format?
Still learning C and I was wondering:
Given a number, is it possible to do something like the following?
char a = 5;
printf("binary representation of a = %b",a);
> 101
Or would I have to write my own method to do the transformation to binary?
There is no direct way (i.e. using printf or another standard library function) to print it. You will have to write your own function.
/* This code has an obvious bug and another non-obvious one :) */
void printbits(unsigned char v) {
for (; v; v >>= 1) putchar('0' + (v & 1));
}
If you're using a terminal, you can use control codes to print out the bytes in natural order:

#include <math.h>  /* needed for ceil() and log2() */
#include <stdio.h>

void printbits(unsigned char v) {
    printf("%*s", (int)ceil(log2(v)) + 1, "");
    for (; v; v >>= 1) printf("\x1b[2D%c", '0' + (v & 1));
}
Based on dirkgently's answer, but fixing his two bugs, and always printing a fixed number of digits:
void printbits(unsigned char v) {
    int i; // for C89 compatibility
    for (i = 7; i >= 0; i--) putchar('0' + ((v >> i) & 1));
}
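A quick hypothetical driver to show the fixed-width output:

#include <stdio.h>

int main(void) {
    printbits(5); /* prints 00000101 */
    putchar('\n');
    return 0;
}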
Yes (write your own), something like the following complete function.
#include <stdio.h>  /* only needed for the printf() in main(). */
#include <stdlib.h> /* for the atoi() in main(). */

/* Create a string of binary digits based on the input value.
   Input:
       val: value to convert.
       buff: buffer to write to must be >= sz+1 chars.
       sz: size of buffer.
   Returns address of string or NULL if not enough space provided.
*/
static char *binrep (unsigned int val, char *buff, int sz) {
    char *pbuff = buff;

    /* Must be able to store one character at least. */
    if (sz < 1) return NULL;

    /* Special case for zero to ensure some output. */
    if (val == 0) {
        *pbuff++ = '0';
        *pbuff = '\0';
        return buff;
    }

    /* Work from the end of the buffer back. */
    pbuff += sz;
    *pbuff-- = '\0';

    /* For each bit (going backwards) store character. */
    while (val != 0) {
        if (sz-- == 0) return NULL;
        *pbuff-- = ((val & 1) == 1) ? '1' : '0';

        /* Get next bit. */
        val >>= 1;
    }

    return pbuff+1;
}
Add this main to the end of it to see it in operation:
#define SZ 32

int main(int argc, char *argv[]) {
    int i;
    int n;
    char buff[SZ+1];

    /* Process all arguments, outputting their binary. */
    for (i = 1; i < argc; i++) {
        n = atoi (argv[i]);
        printf("[%3d] %9d -> %s (from '%s')\n", i, n,
            binrep(n,buff,SZ), argv[i]);
    }

    return 0;
}
Run it with "progname 0 7 12 52 123" to get:
[ 1] 0 -> 0 (from '0')
[ 2] 7 -> 111 (from '7')
[ 3] 12 -> 1100 (from '12')
[ 4] 52 -> 110100 (from '52')
[ 5] 123 -> 1111011 (from '123')
#include <iostream>
#include <conio.h>
#include <stdlib.h>
using namespace std;

void displayBinary(int n)
{
    char bistr[1000];
    itoa(n, bistr, 2); // base 2 means binary; itoa (nonstandard) can convert n up to base 36
    printf("%s", bistr);
}

int main()
{
    int n;
    cin >> n;
    displayBinary(n);
    getch();
    return 0;
}
Use a lookup table, like:
char *table[16] = {"0000", "0001", .... "1111"};
then print each nibble like this
printf("%s%s", table[a / 0x10], table[a % 0x10]);
You could instead use a single 256-entry table of full 8-character strings; one lookup instead of two would be marginally faster, but the table gets much bigger.
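A complete sketch of the two-lookups-per-byte idea, with the 16-entry table written out in full (the original answer elides the middle entries):

#include <stdio.h>

static const char *nibble[16] = {
    "0000", "0001", "0010", "0011", "0100", "0101", "0110", "0111",
    "1000", "1001", "1010", "1011", "1100", "1101", "1110", "1111"
};

void printbyte(unsigned char a) {
    /* high nibble first, then low nibble */
    printf("%s%s", nibble[a / 0x10], nibble[a % 0x10]);
}

int main(void) {
    printbyte(5); /* prints 00000101 */
    putchar('\n');
    return 0;
}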
There is no direct format specifier for this in the C language, but here is a quick Python snippet to walk through the process step by step so you can roll your own:
#!/usr/bin/python
dec = input("Enter a decimal number to convert: ")
base = 2
solution = ""

while dec >= base:
    solution = str(dec % base) + solution
    dec = dec / base

if dec > 0:
    solution = str(dec) + solution

print solution
Explained:
dec = input("Enter a decimal number to convert: ") - prompt the user for numerical input (there are multiple ways to do this in C via scanf for example)
base = 2 - specify our base is 2 (binary)
solution = "" - create an empty string in which we will concatenate our solution
while dec >= base: - while our number is bigger than the base entered
solution = str(dec%base) + solution - get the modulus of the number to the base, and add it to the beginning of our string (we must add numbers right to left using division and remainder method). the str() function converts the result of the operation to a string. You cannot concatenate integers with strings in python without a type conversion.
dec = dec/base - divide the decimal number by the base in preparation for taking the next modulo
if dec > 0:
solution = str(dec) + solution - if anything is left over, add it to the beginning (this will be 1, if anything)
print solution - print the final number
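Since the question is about C, here is the same loop translated into a C sketch; it builds the string right to left in a fixed buffer instead of prepending to a string:

#include <stdio.h>

int main(void) {
    unsigned int dec;
    char solution[33]; /* room for 32 binary digits plus the terminator */
    int pos = 32;
    solution[pos] = '\0';

    printf("Enter a decimal number to convert: ");
    if (scanf("%u", &dec) != 1)
        return 1;

    if (dec == 0)
        solution[--pos] = '0';
    while (dec > 0) {
        solution[--pos] = '0' + dec % 2; /* remainder is the next digit, right to left */
        dec /= 2;
    }
    printf("%s\n", solution + pos);
    return 0;
}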
This code should handle your needs up to 64 bits.
#include <stdio.h> /* for sprintf() and printf() */

char* pBinFill(long int x, char *so, char fillChar); // version with fill
char* pBin(long int x, char *so);                    // version without fill
#define width 64

char* pBin(long int x, char *so)
{
    char s[width+1];
    int i = width;
    s[i--] = 0x00;                    // terminate string
    do {                              // fill in array from right to left
        s[i--] = (x & 1) ? '1' : '0'; // determine bit
        x >>= 1;                      // shift right 1 bit
    } while (x > 0);
    i++;                              // point to last valid character
    sprintf(so, "%s", s+i);           // stick it in the output string
    return so;
}

char* pBinFill(long int x, char *so, char fillChar)
{                                     // fill in array from right to left
    char s[width+1];
    int i = width;
    s[i--] = 0x00;                    // terminate string
    do {
        s[i--] = (x & 1) ? '1' : '0';
        x >>= 1;                      // shift right 1 bit
    } while (x > 0);
    while (i >= 0) s[i--] = fillChar; // fill remainder with fillChar
    sprintf(so, "%s", s);
    return so;
}

void test()
{
    char so[width+1];                 // working buffer for pBin
    long int val = 1;
    do {
        printf("%ld =\t\t%#lx =\t\t0b%s\n", val, val, pBinFill(val, so, '0'));
        val *= 11;                    // generate test data
    } while (val < 100000000);
}
Output:
00000001 = 0x000001 = 0b00000000000000000000000000000001
00000011 = 0x00000b = 0b00000000000000000000000000001011
00000121 = 0x000079 = 0b00000000000000000000000001111001
00001331 = 0x000533 = 0b00000000000000000000010100110011
00014641 = 0x003931 = 0b00000000000000000011100100110001
00161051 = 0x02751b = 0b00000000000000100111010100011011
01771561 = 0x1b0829 = 0b00000000000110110000100000101001
19487171 = 0x12959c3 = 0b00000001001010010101100111000011
You have to write your own transformation; only decimal, hex and octal numbers are supported by printf's format specifiers.