Anyone know why certain fields in proc.h in Minix are char, when I thought they'd be int?
37 char p_ticks_left; /* number of scheduling ticks left */
38 char p_quantum_size; /* quantum size in ticks */
So, if we want to add a new "int" field should we make it a char?
If char is big enough to hold all the necessary values, why not use it? Of course, int may be somewhat more performant, but at the same time char is usually smaller.
I believe you can use any type that makes sense.
Considering the design, a char may be enough to hold the values of "number of scheduling ticks left" and "quantum size in ticks", and the size of char is smaller than the size of int.
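If it helps to see the size difference concretely, here is a small sketch; the struct and field names are made up for illustration and are not the real proc.h layout:

#include <stdio.h>

struct with_char { char ticks_left; char quantum_size; };
struct with_int  { int  ticks_left; int  quantum_size; };

int main(void)
{
    /* On a typical platform this prints 2 and 8: each char field takes one byte,
       while each int field usually takes four (exact sizes and padding depend on
       the compiler and target). */
    printf("%zu %zu\n", sizeof(struct with_char), sizeof(struct with_int));
    return 0;
}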
Using C, One array with 5 memory spaces
int call[5];
I'm trying to figure out how to use the first 3 spaces of the array to make a base-36 conversion (meaning 1K0 in base-36 equals 2016 in base-10); the other 2 spaces would be filled with data (probably more ints).
However... does 1K0 actually look like an int? (To me K looks like a char, and in theory char should be enough, -127 to 127, for the conversion using base-36.)
However, what would happen if I try to do this using int instead of char?
Is there any alternative to using a base-36 conversion only in the first part of the array, mixed with ints for the rest of the spaces in memory?
Does it matter? (Since the array was declared int.)
EDIT: To be clear, I just want to know if I can declare an int array and fill it with chars, and if I can't, how can I achieve this?
I'm not exactly sure you can tell the compiler that. You can write hex (0x), octal (leading 0) and, on some compilers, binary (0b) literals, but base 36 is odd enough not to be standard.
You can always make a function (and maybe embed it in a class if you're in C++) that does the base-36 string to base-10 conversion.
I'll do my best in C terms
#include <ctype.h>  /* tolower */
#include <math.h>   /* pow */
#include <string.h> /* strlen */

int base10Number = 0;
int base36StringLength = strlen(base36String);
for (int i = base36StringLength - 1; i >= 0; i--) { // count from the end of the string back
    char c = base36String[i];
    if (c <= '9') {
        c -= '0'; // gives a 0 to 9 range
    }
    else { // assuming that the string is perfect and has no extra characters, it's safe to just do this
        c = tolower(c); // first make sure it's lowercase
        c = c - 'a' + 10; // this will make the letters start at 10 dec
    }
    base10Number += c * pow(36, base36StringLength - 1 - i); // the power function specifies which 'wheel' you're turning (imagine an old analog odometer) and then turns the wheel c times, then adds it to the overall sum.
}
This whole code works on the principle that every digit, starting from the last, is worth its base-10 value multiplied by 36 to the power of its position from the end.
So the last digit is worth c * 36^0 (which is just c), the next is c * 36^1, and so on; similar to how 2 * 10^1 equals 20 when the 2 is in the second-to-last position.
Hope that some of this makes sense to you, and don't forget to make your base-36 number a string.
EDIT:
I saw your edit to the question and the short answer is yes, you can totally do that, but it would be a waste of space since you'll have a whole 3 bytes unused at all times; you could simply make it a char array. Besides, all the string functions will demand that you feed them a char array (you can cast it), and if you're storing one digit per array slot, char will do the trick. If your array is dynamic and/or you need to do some math on it, the base-36 to base-10 conversion will allow you to do the math and collapse the whole array into a single int or float. But if you're just going to store it to display it later, or to feed it to another function in the same format, the conversion is not necessary at all. (If you're working with a big amount of these numbers and you need to put them in a database, converting to base 10 and storing them in a single int will save tons of space.)
PS: I also edited the code to use ' '-enclosed chars instead of ASCII numbers, thanks for the request!
I wrote a small program to compute Fibonacci numbers:
#include <stdio.h>
int main()
{
    int first, second, final;
    first = 0;
    second = 1;
    printf("0\n1\n"); /* I know this is a quick fix, but the program still works */
    while (final <= 999999999999999999) {
        final = first + second;
        first = second;
        second = final;
        printf("%d\n", final);
    }
}
Is there any way to increase the speed at which this program computes these calculations? Could you possibly explain the solution to me (if one exists)?
Thanks.
Of course it's possible! :)
First of all, please note you're using signed int for your variables, and max int on a 32 bit machine is 2,147,483,647 which is approx. 10^8 times smaller than the max number you're using, which is 999,999,999,999,999,999.
I'd recommend changing the max number to INT_MAX (you'll need to include limits.h).
Edit:
In addition, as said in the comments to my answer, consider changing to unsigned int. You only need positive values, and the maximum number will be twice as high.
2nd Edit:
That being said, when final gets as close as it can to the limit in the condition, the next addition will exceed INT_MAX and result in an overflow. That means the condition here will never be met.
It's better to just change the condition to the number of times you want the loop to run. Please note, though, that any Fibonacci number larger than the maximum number that can be stored in your variable type will result in an overflow.
Secondly, final isn't initialized. Write final = 0 to avoid errors.
I recommend turning on all the warnings in your compiler. It can catch many errors at compile time :)
Also, I see no reason not to initialize the variables when you declare them. The value is already known.
Now for the speed of the program:
I'm not sure to what extent you're willing to change the code, but the simplest change that doesn't alter the original flow is to make fewer calls to printf().
Since printf() is a function that waits for a system resource to become available, it is probably the most time-consuming part of your code.
Maybe consider storing the output in a string and, let's say every 100 numbers, printing the string to the screen.
Try to create a string with a size of
(10 (number of chars in an int) + 1 (newline char)) * 100 (arbitrary, based on when you'll want to flush the data to the screen)
Consider using sprintf() to write to a string in the inner loop, and strcat() to append a string to another string.
Then, every 100 times, use printf() to write to the screen.
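As a rough sketch of that idea (BATCH, LINE_LEN and the stand-in counting loop are arbitrary illustrations, not part of the original program):

#include <stdio.h>
#include <string.h>

#define BATCH 100   /* flush to the screen every 100 numbers */
#define LINE_LEN 12 /* up to 10 digits for an int, a newline and the '\0' */

int main(void)
{
    char buffer[BATCH * LINE_LEN + 1] = "";
    char line[LINE_LEN];
    int count = 0;

    for (int i = 0; i < 1000; i++) {  /* stand-in for the Fibonacci loop */
        sprintf(line, "%d\n", i);     /* format one number into a small string */
        strcat(buffer, line);         /* append it to the batch buffer */
        if (++count == BATCH) {
            printf("%s", buffer);     /* one printf() call per 100 numbers */
            buffer[0] = '\0';
            count = 0;
        }
    }
    printf("%s", buffer);             /* flush whatever is left over */
    return 0;
}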
As already stated in other answers, you have two obvious problems: 1) the missing initialization of final, and 2) that your loop condition will result in an endless loop, because 999999999999999999 is larger than any value an int can hold.
The real problem here is that you use a fixed number in the condition for the while.
How do you know which number to use so that you actually calculate all the Fibonacci numbers possible for the integer type being used? Without knowing the numbers in advance you can't do that! So you need a better condition for stopping the loop.
One way of solving this is to check for overflow instead, like:
while (second <= (INT_MAX - first)) { // Loop while the next number won't overflow
The above approach checks whether the next first + second would exceed INT_MAX before actually computing it. In this way signed overflow (and thereby undefined behaviour) is prevented.
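Plugged into the original program, that check could look roughly like this (a sketch; it prints every Fibonacci number that fits in an int and then stops):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int first = 0, second = 1, final;
    printf("0\n1\n");
    while (second <= (INT_MAX - first)) { /* stop before first + second would overflow */
        final = first + second;
        printf("%d\n", final);
        first = second;
        second = final;
    }
    return 0;
}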
Another approach is to use unsigned integers and deliberately let the addition wrap around (which is well defined for unsigned types). Using unsigned long long that could look like:
unsigned long long first, second, next;
first = 1;
second = 1;
printf("1\n1\n");
next = first + second;
while (next > second) { // Stop when there was an overflow
    printf("%llu\n", next);
    first = second;
    second = next;
    next = first + second;
}
Speed isn't your problem. You have an infinite loop:
while (final <= 999999999999999999) {
final has type int. Most likely int is 32-bit on your system, which means the maximum value it can hold is 2147483647. This will always be smaller than 999999999999999999 (which is a constant of type long long), so the loop never ends.
Change the datatype of your variables to long long and the loop will terminate after about 87 iterations. Also, you'll need to change your printf format specifier from %d to %lld to match the datatype printed.
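A minimal sketch of that change (the LL suffix on the constant is just for clarity; the essential parts are the long long variables, the initialization of final mentioned in the other answers, and the %lld specifier):

#include <stdio.h>
int main()
{
    long long first = 0, second = 1, final = 0;
    printf("0\n1\n");
    while (final <= 999999999999999999LL) {
        final = first + second;
        first = second;
        second = final;
        printf("%lld\n", final);
    }
}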
Why are you asking this question?
If the intention is to increase performance, you might go for the closed-form formula for the n-th Fibonacci number, which is something like:
(((1+√5)/2)^n - ((1-√5)/2)^n) / √5 (where √5 is the square root of 5).
If it's about learning to increase performance, you might do a code review or use performance diagnostics tools.
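For what it's worth, a rough sketch of that closed-form (Binet) approach in C (link with -lm); note that double precision only gives exact results up to roughly the first 70 Fibonacci numbers:

#include <math.h>
#include <stdio.h>

/* F(n) = (phi^n - psi^n) / sqrt(5), with phi = (1+sqrt(5))/2 and psi = (1-sqrt(5))/2 */
long long fib_closed_form(int n)
{
    const double sqrt5 = sqrt(5.0);
    const double phi = (1.0 + sqrt5) / 2.0;
    const double psi = (1.0 - sqrt5) / 2.0;
    return llround((pow(phi, n) - pow(psi, n)) / sqrt5);
}

int main(void)
{
    for (int n = 0; n <= 20; n++)
        printf("F(%d) = %lld\n", n, fib_closed_form(n));
    return 0;
}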
Quick question for those more experienced in C...
I want to compute a SHA256 checksum using the functions from openssl for the current time an operation takes place. My code consists of the following:
time_t cur_time = 0;
char t_ID[40];
char obuf[40];
char * timeBuf = malloc(sizeof(char) * 40 + 1);
sprintf(timeBuf, "%s", asctime(gmtime(&cur_time)));
SHA256(timeBuf, strlen(timeBuf), obuf);
sprintf(t_ID, "%02x", obuf);
And yet, when I print out the value of t_ID in a debug statement, it looks like 'de54b910'. What am I missing here?
Edited to fix my typo around malloc and also to say I expected to see the digest form of a sha256 checksum, in hex.
Since obuf is an array, printing its value causes it to decay to a pointer and prints the value of the memory address that the array is stored at. Write sensible code to print a 256-bit value.
Maybe something like:
for (int i = 0; i < 32; ++i)
    printf("%02X", obuf[i]);
This is not really intended as an answer, I'm just sharing a code fragment with the OP.
To hash the binary time_t directly without converting the time to a string, you could use something like (untested):
time_t cur_time;
char t_ID[40];
unsigned char obuf[40];
time(&cur_time); // note: time(), not gmtime(), is what fills in cur_time
SHA256((const unsigned char *)&cur_time, sizeof(cur_time), obuf);
// You know this doesn't work:
// sprintf(t_ID, "%02x", obuf);
// Instead see https://stackoverflow.com/questions/6357031/how-do-you-convert-buffer-byte-array-to-hex-string-in-c
How do you convert buffer (byte array) to hex string in C?
This doesn't address byte order. You could use network byte order functions, see:
htons() function in socket programing
http://beej.us/guide/bgnet/output/html/multipage/htonsman.html
One complication: the size of time_t is not specified; it can vary by platform. It's traditionally 32 bits, but on 64-bit machines it can be 64 bits. It's also usually the number of seconds since the Unix epoch, midnight, January 1, 1970.
If you're willing to live with assumption that the resolution is seconds and don't have to worry about the code working in 20 years (see: https://en.wikipedia.org/wiki/Year_2038_problem) then you might use (untested):
#include <netinet/in.h>
time_t cur_time;
uint32_t net_cur_time; // cur_time converted to network byte order
unsigned char obuf[40];
time(&cur_time); // again, time() is what fills in cur_time
net_cur_time = htonl((uint32_t)cur_time);
SHA256((const unsigned char *)&net_cur_time, sizeof(net_cur_time), obuf);
I'll repeat what I mentioned in a comment: it's hard to understand what you possibly hope to gain from this hash, or why you can't use the timestamp directly. Cryptographically secure hashes such as SHA256 go through a lot of work to ensure the hash is not reversible. You can't benefit from that because the input data is from a limited known set. At the very least, why not use CRC32 instead because it's much faster.
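If speed is the concern, a rough sketch of computing a CRC32 of the same string using zlib instead (assumes zlib is available; link with -lz):

#include <string.h>
#include <zlib.h>

unsigned long checksum_of(const char *timeBuf)
{
    unsigned long crc = crc32(0L, Z_NULL, 0);                           /* initial CRC value */
    return crc32(crc, (const unsigned char *)timeBuf, strlen(timeBuf)); /* CRC of the string */
}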
Good luck.
Today I am trying to copy an unsigned long variable into the contents of an unsigned char * variable.
The reasoning for this is that I wrote an RC4 cipher which requires the key input to be an unsigned char *. I am using the SYSTEMTIME structure to obtain a value and combining it with a randomly generated long value to obtain my key for RC4 - I am using it as a timestamp for a user-created account to mark in my sqlite dbs.
Anyway, the problem I ran into is that I cannot copy the ULONG into the PUCHAR.
I've tried
wsprintfA(reinterpret_cast<LPSTR>(ucVar), "%lu", ulVar);
and I've tried
wsprintfA((LPSTR)ucVar, "%lu", ulVar);
However, after executing my program the result in ucVar is just empty, or it doesn't even compute and crashes the application.
[edit 1]
I thought maybe the memcpy approach would work, so I tried declaring another variable and moving it into ucVar, but it still crashed the application - i.e. it didn't reach the MessageBox():
unsigned char *ucVar;
char tmp[64]; // since ulVar will never be bigger than 63 characters + 1 for '\0'
wsprintfA(tmp, "%lu", ulVar);
memcpy(ucVar, tmp, sizeof(tmp));
MessageBox(0, (LPSTR)ucVar, "ucVar", 0);
[/edit 1]
[edit 2]
HeapAlloc() on ucVar with a size of 64 fixed my problem, thank you ehnz for your suggestion!
[/edit 2]
Can anyone give me some approach to this problem? It is greatly appreciated!
Regards,
Andrew
Unless you have ownership of memory you're trying to use, all kinds of things can happen. These may range from the error going unnoticed because nothing else already owns that memory, to an instant crash, to a value that disappears because something else overwrites the memory between the time that you set it and the time that you attempt to retrieve a value from it.
Fairly fundamental concepts when dealing with dynamic memory allocation, but quite the trap for the uninitiated.
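To make the ownership point concrete, here is a minimal sketch of the fix from edit 2, using plain malloc() in place of HeapAlloc() (the value of ulVar is made up for illustration):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long ulVar = 123456789UL;         /* stand-in for the real value */
    unsigned char *ucVar = malloc(64);         /* ucVar now owns 64 writable bytes */
    if (ucVar != NULL) {
        sprintf((char *)ucVar, "%lu", ulVar);  /* no longer writing through an uninitialized pointer */
        printf("%s\n", (char *)ucVar);
        free(ucVar);
    }
    return 0;
}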
I'm going to do my best to explain exactly what my problem is. I'm currently taking an embedded systems class and I'm really struggling with this portion of the project. I have a small STM32 board with a simple LCD screen connected to it. I have a function already written that will take a single char and write it to the LCD. Now, I often use that function to write chars, but there is one scenario in which I need to write an int between 0 and 99 to the screen. The int variable is always changing because it is based on the value in a timer when a user presses the button on the board. I have been stumped by this for hours and I could really use some help. I have emailed my teacher but he isn't replying. Any help is greatly appreciated.
Also, the LCD_write function is already provided to me by my teacher so I can't just create a version of it that will accept an int.
If I haven't given enough detail just let me know.
You just need to convert your int to two chars:
char ch0 = i % 10 + '0'; // convert least significant digit to char
char ch1 = i / 10 + '0'; // convert most significant digit to char
Note: this assumes that you can guarantee that the int is always in the range 0..99.
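For example, assuming the teacher-provided LCD_write() takes a single char (a hypothetical usage sketch, not a tested implementation):

/* Writes a value in the range 0..99 as two digits, most significant first. */
void LCD_write_two_digits(int i)
{
    LCD_write(i / 10 + '0'); /* tens digit */
    LCD_write(i % 10 + '0'); /* units digit */
}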