C RS232 comm. How to compare CPU time?

First time posting so there's probably gonna be more info than necessary but I wanna be thorough:
One of our exercises in C was to create sender and receiver programs that would exchange data via RS232 serial communication with null modem. We used a virtual port program (I used the trial version of Virtual Serial Port by eltima software if you want to test). We were required to do 4 versions:
1) Using a predetermined library, created by a previous student, that had premade sender and receiver functions
2) Using the inportb and outportb functions
3) Using OS interrupt int86 and giving register values through the REGS union
4) Using inline assembly
Compiler: DevCPP (Bloodshed).
All worked, but now we are required to compare all the different versions based on the CPU time that is spent to send and receive a character. It specifically says that we have to find the following:
average, standard deviation, min, max and 99.5%
Nothing was explained in class so I'm a little lost here... I'm guessing those are statistics taken over many trials, assuming a normal distribution? But even then, how do I actually measure the CPU cycles spent on this? I'll keep searching, but I'm posting here in the meantime 'cause the deadline is in 3 days :D.
Code sample of the int86 version:
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>   /* kbhit(), getch() */
#include <dos.h>

#define RS232_INIT_FUNCTION 0
#define RS232_SEND_FUNCTION 1
#define RS232_GET_FUNCTION 2
#define RS232_STATUS_FUNCTION 3
#define DATA_READY 0x01
#define PARAM 0xEF
#define COM1 0
#define COM2 1

void rs232init(int port, unsigned init_code)
{
    union REGS inregs;
    inregs.x.dx = port;
    inregs.h.ah = RS232_INIT_FUNCTION;
    inregs.h.al = init_code;
    int86(0x14, &inregs, &inregs);
}

unsigned char rs232transmit(int port, char ch)
{
    union REGS inregs;
    inregs.x.dx = port;
    inregs.h.ah = RS232_SEND_FUNCTION;
    inregs.h.al = ch;
    int86(0x14, &inregs, &inregs);
    return inregs.h.ah;
}

unsigned char rs232status(int port)
{
    union REGS inregs;
    inregs.x.dx = port;
    inregs.h.ah = RS232_STATUS_FUNCTION;
    int86(0x14, &inregs, &inregs);
    return inregs.h.ah; /* because we want the high byte of ax */
}

unsigned char rs232receive(int port)
{
    union REGS inregs;
    while (!(rs232status(port) & DATA_READY)) {
        if (kbhit()) { /* let the user abort the wait with a keypress */
            getch();
            exit(1);
        }
    }
    inregs.x.dx = port;
    inregs.h.ah = RS232_GET_FUNCTION;
    int86(0x14, &inregs, &inregs);
    if (inregs.h.ah & 0x80) {
        printf("ERROR");
        return -1;
    }
    return inregs.h.al;
}

int main()
{
    unsigned char ch;
    int d, i;
    do {
        puts("What would you like to do?");
        puts("1.Send data");
        puts("2.Receive data");
        puts("0.Exit");
        scanf("%d", &i);
        getchar();
        if (i == 1) {
            rs232init(COM1, PARAM);
            puts("Which char would you like to send?");
            scanf("%c", &ch);
            getchar();
            while (!rs232status(COM1))
                ;
            d = rs232transmit(COM1, ch);
            if (d & 0x80)
                puts("ERROR"); /* bit 7 of ah signals an error */
        } else if (i == 2) {
            rs232init(COM1, PARAM);
            puts("Receiving character...");
            ch = rs232receive(COM1);
            printf("%c\n", ch);
        }
    } while (i != 0);
    system("pause");
    return 0;
}

There is some guesswork required here because the question is a little under-specified.
You've listed four different methods for sending/receiving a character. What I suspect your lecturer is looking for is the time from when you call the given method (or enter your inline assembly code) to the time when you return from it. You will need to grab a time just before the call and just after the call, and take their difference.
Less ambiguous is CPU time. The clock() function is the most straightforward way to measure it, though this may not be what the lecturer is looking for.
Finally there are the statistics, which are straightforward: do a large number of runs and compute the statistics over the measured times.
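A minimal sketch of the whole procedure, using clock() as suggested. The trial count, the byte sent, the nearest-rank percentile and the use of rs232transmit from the question as the call under test are my assumptions, not part of the exercise:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>

#define TRIALS 1000   /* illustrative; more runs give smoother statistics */

/* The call under test; rs232transmit is taken from the question's code. */
extern unsigned char rs232transmit(int port, char ch);

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    double samples[TRIALS];
    int i;

    for (i = 0; i < TRIALS; i++) {
        clock_t start = clock();
        rs232transmit(0, 'A');                      /* COM1, arbitrary byte */
        clock_t end = clock();
        samples[i] = (double)(end - start) / CLOCKS_PER_SEC;
        /* If clock()'s resolution is too coarse to see a single send,
           time a batch of N sends instead and divide the elapsed time by N. */
    }

    double sum = 0.0, min = samples[0], max = samples[0];
    for (i = 0; i < TRIALS; i++) {
        sum += samples[i];
        if (samples[i] < min) min = samples[i];
        if (samples[i] > max) max = samples[i];
    }
    double mean = sum / TRIALS;

    double var = 0.0;
    for (i = 0; i < TRIALS; i++)
        var += (samples[i] - mean) * (samples[i] - mean);
    double stddev = sqrt(var / (TRIALS - 1));       /* sample standard deviation */

    /* 99.5th percentile by the nearest-rank method: sort, then take the
       value at rank ceil(0.995 * N) (the 995th of 1000 here). */
    qsort(samples, TRIALS, sizeof(double), cmp_double);
    double p995 = samples[(int)ceil(0.995 * TRIALS) - 1];

    printf("mean=%g s  stddev=%g s  min=%g s  max=%g s  99.5%%=%g s\n",
           mean, stddev, min, max, p995);
    return 0;
}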

Related

Big latency in bluetooth communication

I have tried to write wireless servo control using two Arduino Nano v3 boards and two Bluetooth 4.0 modules. The first sketch is the transmitter. It's very simple: it reads PPM signals and transforms them into separate PWM values for each channel. I use the hardware serial port.
#include <PPMReader.h>
#include <InterruptHandler.h>

int ppmInputPin = 3;
int channelAmount = 2;
PPMReader ppm(ppmInputPin, channelAmount);

void setup()
{
    Serial.begin(9600);
    Serial.write("AT\r\n");
    delay(10);
    Serial.write("AT\r\n");
    Serial.write("AT+INQ\r\n");
    delay(5000);
    Serial.write("AT+CONN1\r\n");
}

void loop()
{
    unsigned long value1 = ppm.latestValidChannelValue(1, 0);
    Serial.println(value1);
}
The receiver is simple too. It reads values from Bluetooth, parses them into an integer value and drives a servo on pin 7. Again I have used the hardware serial port.
#include <Servo.h>

int PWM_OUTPUT = 7;
Servo servo;

void setup() {
    servo.attach(PWM_OUTPUT);
    Serial.begin(9600);
}

void loop() {
    int pwmValue = Serial.parseInt();
    if (Serial.available()) {
        if (pwmValue > 900 && pwmValue < 2001) {
            servo.writeMicroseconds(pwmValue);
        }
    }
}
It all works, but with a delay of around 2-3 seconds. Could the problem be "spamming" of the serial port?
The first thing you need to ask yourself when implementing a device-to-device communication is how fast should I be sending? and if I send at that rate: is the receiver going to be able to keep pace (reading, doing processing or whatever it needs to do and answer back)?
This is obviously not about the baud rate but about what your loops are doing. You are using two different libraries: PPMReader and Servo. Now, pay attention to what each device is doing in their respective loops:
// Sending
void loop() {
    unsigned long value1 = ppm.latestValidChannelValue(1, 0);
    Serial.println(value1);
}

// Receiving
void loop() {
    int pwmValue = Serial.parseInt();
    if (pwmValue > 900 && pwmValue < 2001) {
        servo.writeMicroseconds(pwmValue);
    }
}
I don't really know how long it takes to execute each line of code (take a look here for some comments on that), but you cannot seriously expect both loops to magically synchronize themselves. Considering they are doing very different things (leaving the serial part aside) and dealing with different hardware, I would expect one of them to take significantly longer than the other. Think about what happens if that's the case.
As I said, I have no idea how long a call to ppm.latestValidChannelValue(1, 0) takes, but for the sake of my argument let's say it takes 0.1 milliseconds. To estimate the time of one iteration around the loop, add the time it takes to print one (or two) bytes to the port with Serial.println(value1); that part is easier, and maybe 20-100 microseconds is a good ballpark figure. With these estimates, you end up sending around 5000 readings per second. If you don't trust my estimates, do your own tests with a counter or a timer. Now do the same exercise for the other side of the link, and say it turns out to be twice as fast, running 10000 iterations per second. What do you think will happen to the communication? Yes, that's right: it will get clogged and run at a snail's pace.
Here you should carefully consider whether you really need that many readings (you did not elaborate on what you're actually doing, so I have no idea, but I lean towards thinking you don't). If you don't, just add a delay on the sender's side to slow it down to a reasonable speed (maybe 10-20 iterations per second), as in the sketch below.
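For instance, on the transmitter (the delay value is illustrative, not from the answer):
void loop()
{
    unsigned long value1 = ppm.latestValidChannelValue(1, 0);
    Serial.println(value1);
    delay(50);  // illustrative: caps the loop at roughly 20 iterations per second
}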
There are other things to improve in your code: you should check that you have received data in the buffer before reading it (not after), as sketched below. And you need to be careful with Serial.parseInt(), which sometimes leads to unexpected results, but this answer is already too long and I don't want to extend it even more.
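A minimal sketch of that ordering fix on the receiver, keeping the thresholds from the question:
void loop() {
    if (Serial.available() > 0) {          // make sure data has arrived first...
        int pwmValue = Serial.parseInt();  // ...then parse it
        if (pwmValue > 900 && pwmValue < 2001) {
            servo.writeMicroseconds(pwmValue);
        }
    }
}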
I found the problem. It was the serial port spamming. I added a check that the current value is not equal to the previous value, and it started working. The next small issue was in the receiver: I read the value before it was available.
#include <PPMReader.h>
#include <InterruptHandler.h>

int ppmInputPin = 3;
int channelAmount = 2;
PPMReader ppm(ppmInputPin, channelAmount);
volatile unsigned long previousValue1 = 0;

void setup()
{
    Serial.begin(9600);
    Serial.write("AT\r\n");
    delay(10);
    Serial.write("AT\r\n");
    Serial.write("AT+INQ\r\n");
    delay(5000);
    Serial.write("AT+CONN1\r\n");
    Serial.println("Transmitter started");
}

void loop()
{
    unsigned long value1 = ppm.latestValidChannelValue(1, 0);
    if (previousValue1 != value1) {
        previousValue1 = value1;
        Serial.println(value1);
    }
}

Check only last 3 digits of sensor output

I have a library from WiringPi for the DHT11 sensor and I need to modify the condition which checks whether the value read from the sensor is good.
Sometimes the library reads bad values, such as 255.255,255.255 or 55,255.255 etc.
sample output
This is the condition in the library:
if(counter==255)
    break;
But it doesn't work if the value is e.g. 55,255.255.
How can I modify this condition to check the last 3 digits of the output?
If the output is wrong, there is always "255" at the end of the value.
I tried to add conditions like
if(counter==255)
    break;
else if(counter==255.255)
    break;
But that doesn't cover all possible situations, and I really don't know anything about C/C++.
Here is the whole library:
#include <wiringPi.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define MAX_TIME 85
#define DHT11PIN 7
#define ATTEMPTS 5

int dht11_val[5] = {0, 0, 0, 0, 0};

int dht11_read_val()
{
    uint8_t lststate = HIGH;
    uint8_t counter = 0;
    uint8_t j = 0, i;
    for (i = 0; i < 5; i++)
        dht11_val[i] = 0;
    pinMode(DHT11PIN, OUTPUT);
    digitalWrite(DHT11PIN, LOW);
    delay(18);
    digitalWrite(DHT11PIN, HIGH);
    delayMicroseconds(40);
    pinMode(DHT11PIN, INPUT);
    for (i = 0; i < MAX_TIME; i++)
    {
        counter = 0;
        while (digitalRead(DHT11PIN) == lststate) {
            counter++;
            delayMicroseconds(1);
            if (counter == 255)
                break;
        }
        lststate = digitalRead(DHT11PIN);
        if (counter == 255)
            break;
        // top 3 transitions are ignored
        if ((i >= 4) && (i % 2 == 0)) {
            dht11_val[j / 8] <<= 1;
            if (counter > 16)
                dht11_val[j / 8] |= 1;
            j++;
        }
    }
    // verify checksum and print the verified data
    if ((j >= 40) && (dht11_val[4] == ((dht11_val[0] + dht11_val[1] + dht11_val[2] + dht11_val[3]) & 0xFF)))
    {
        printf("%d.%d,%d.%d\n", dht11_val[0], dht11_val[1], dht11_val[2], dht11_val[3]);
        return 1;
    }
    else
        return 0;
}

int main(void)
{
    int attempts = ATTEMPTS;
    if (wiringPiSetup() == -1)
        exit(1);
    while (attempts)
    {
        int success = dht11_read_val();
        if (success) {
            break;
        }
        attempts--;
        delay(500);
    }
    return 0;
}
No single variable in your code can hold "255.255"; that would require a string or a float. You are obviously referring to the output of
printf("%d.%d,%d.%d\n",dht11_val[0],dht11_val[1],dht11_val[2],dht11_val[3]);
This printf can never produce a three-value output like 55,255.255, so I assume your output was actually 55.255,255.255.
This in turn means that in case of error you will find the "last three digits" in dht11_val[3].
If my assumption is not correct, please provide much more detail on the error circumstances.
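If you did want to act on that assumption, a minimal sketch would be to reject such a frame inside dht11_read_val() before it is printed (though see the caveat that follows):
/* Sketch: treat a frame whose last field is the 255 error marker as bad.
   This assumes bad reads really do always end in 255, as described above. */
if (dht11_val[3] == 255) {
    return 0;   /* report failure; main() will retry after 500 ms */
}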
On the other hand, I suspect that looking for that value is not the solution to your problem either. The function is more complicated than that: the value 255 appears to be the result of an endless loop, which is detected by breaking out early at counter == 255. So I am pretty sure that checking "the last three digits" is a LESS precise check than what is already implemented.

Uart receives correct Bytes but in chaotic order

Using Atmel studio 7, with STK600 and 32UC3C MCU
I'm pulling my hair over this.
I'm sending strings of variable size over UART once every 5 seconds. Each string consists of one letter as an opcode, followed by two chars that give the length of the following data string (without a terminating zero; there is never a zero at the end of any of these strings). In most cases the string is 3 chars long, because it carries no data ("p00").
After investigating, I found out that what was supposed to be "p00" was in fact "0p0" or "00p" (only on the first try after restarting the micro was it "p00"). I looked it up in the memory view of the debugger. Then I started hTerm and confirmed that the data sent was in fact "p00". So after a while hTerm showed me "p00p00p00p00p00p00p00..." while the memory of my circular UART buffer read "p000p000p0p000p0p000p0p0..."
edit: Actually "0p0" and "00p" are alternating.
The baud rate is 9600. In the past I was only sending single letters, so everything was running well.
This is the code of the Receiver Interrupt:
I tried different variations of code that all do the same thing in different ways, but all of them showed the exact same behavior.
lastWebCMDWritePtr and lastWebCMDRingstartPtr are of type uint8_t*; lastWebCMDRingRXLen is a uint8_t.
__attribute__((__interrupt__))
void UartISR_forWebserver()
{
    *(lastWebCMDWritePtr++) = (uint8_t)((&AVR32_USART0)->rhr & 0x1ff);
    lastWebCMDRingRXLen++;
    // wrap the write pointer when it passes the end of the ring buffer
    if (lastWebCMDWritePtr - lastWebCMDRingstartPtr > lastWebCMDRingBufferSIZE)
    {
        lastWebCMDWritePtr = lastWebCMDRingstartPtr;
    }
    // Variation 2:
    // advanceFifo((uint8_t)((&AVR32_USART0)->rhr & 0x1ff));
    // Variation 3:
    // if (usart_read_char(&AVR32_USART0, getReadPointer()) == USART_RX_ERROR)
    // {
    //     usart_reset_status(&AVR32_USART0);
    // }
};
I welcome any of your ideas and advice.
Regards, Someo
P.S. I put the Atmel studio tag in case this has something to do with the myriad of debugger bugs of AS.
For a complete picture you would have to show where and how lastWebCMDWritePtr, lastWebCMDRingRXLen, lastWebCMDRingstartPtr and lastWebCMDRingBufferSIZE are used elsewhere (on the consuming side).
Also, I would first try a simpler ISR with no dependencies on other software modules, to rule out a hardware or register-handling problem.
Approach:
#define USART_DEBUG
#define DEBUG_BUF_SIZE 30

__attribute__((__interrupt__))
void UartISR_forWebserver()
{
    uint8_t rec_byte;
#ifdef USART_DEBUG
    static volatile uint8_t usart_debug_buf[DEBUG_BUF_SIZE]; // circular buffer for debugging
    static volatile int usart_debug_buf_index = 0;
#endif
    rec_byte = (uint8_t)((&AVR32_USART0)->rhr & 0x1ff);
#ifdef USART_DEBUG
    usart_debug_buf_index = usart_debug_buf_index % DEBUG_BUF_SIZE;
    usart_debug_buf[usart_debug_buf_index] = rec_byte;
    usart_debug_buf_index++;
    if (!(usart_debug_buf_index < DEBUG_BUF_SIZE)) {
        usart_debug_buf_index = 0; // candidate for a breakpoint to see what happened in the past
    }
#endif
    //uart_recfifo_enqueue(rec_byte);
};

2D array, prototype function and random numbers [duplicate]

I need a 'good' way to initialize the pseudo-random number generator in C++. I've found an article that states:
In order to generate random-like numbers, srand is usually initialized to some distinctive value, like those related with the execution time. For example, the value returned by the function time (declared in header ctime) is different each second, which is distinctive enough for most randoming needs.
Unix time isn't distinctive enough for my application. What's a better way to initialize this? Bonus points if it's portable, but the code will primarily be running on Linux hosts.
I was thinking of doing some pid/unixtime math to get an int, or possibly reading data from /dev/urandom.
Thanks!
EDIT
Yes, I am actually starting my application multiple times a second and I've run into collisions.
This is what I've used for small command line programs that can be run frequently (multiple times a second):
unsigned long seed = mix(clock(), time(NULL), getpid());
Where mix is:
// Robert Jenkins' 96 bit Mix Function
unsigned long mix(unsigned long a, unsigned long b, unsigned long c)
{
    a=a-b;  a=a-c;  a=a^(c >> 13);
    b=b-c;  b=b-a;  b=b^(a << 8);
    c=c-a;  c=c-b;  c=c^(b >> 13);
    a=a-b;  a=a-c;  a=a^(c >> 12);
    b=b-c;  b=b-a;  b=b^(a << 16);
    c=c-a;  c=c-b;  c=c^(b >> 5);
    a=a-b;  a=a-c;  a=a^(c >> 3);
    b=b-c;  b=b-a;  b=b^(a << 10);
    c=c-a;  c=c-b;  c=c^(b >> 15);
    return c;
}
The best answer is to use <random>. If you are using a pre C++11 version, you can look at the Boost random number stuff.
But if we are talking about rand() and srand(), the simplest way is just to use time():
int main()
{
    srand(time(nullptr));
    ...
}
Be sure to do this at the beginning of your program, and not every time you call rand()!
Side Note:
NOTE: There is a discussion in the comments below about this being insecure (which is true, but ultimately not relevant (read on)). So an alternative is to seed from the random device /dev/random (or some other secure real(er) random number generator). BUT: Don't let this lull you into a false sense of security. This is rand() we are using. Even if you seed it with a brilliantly generated seed it is still predictable (if you have any value you can predict the full sequence of next values). This is only useful for generating "pseudo" random values.
If you want "secure" you should probably be using <random> (though I would do some more reading on a security-focused site). See this answer as a starting point: https://stackoverflow.com/a/29190957/14065
Secondary note: Using the random device actually solves the issues with starting multiple copies per second better than my original suggestion below (just not the security issue).
Back to the original story:
Every time you start up, time() will return a unique value (unless you start the application multiple times a second). On 32-bit systems, it will only repeat every 68 years or so.
I know you don't think time is unique enough but I find that hard to believe. But I have been known to be wrong.
If you are starting a lot of copies of your application simultaneously you could use a timer with a finer resolution. But then you run the risk of a shorter time period before the value repeats.
OK, so if you really are starting multiple instances of the application each second, then use a finer-grained timer:
#include <sys/time.h>
#include <stdlib.h>

int main()
{
    struct timeval time;
    gettimeofday(&time, NULL);
    // A microsecond has 1 000 000 steps per second.
    // Assuming you do not need quite that accuracy
    // (and the system clock may not have it anyway):
    srand((time.tv_sec * 1000) + (time.tv_usec / 1000));
    // The trouble here is that the seed will repeat every
    // 24 days or so.
    // If you use 100 (rather than 1000) the seed repeats every 248 days.
    // Do not make the MISTAKE of using just tv_usec:
    // that would make your seed repeat every second.
}
If you need a better random number generator, don't use the libc rand. Instead just use something like /dev/random or /dev/urandom directly (read an int directly from it, or something like that), as sketched below.
The only real benefit of the libc rand is that, given a seed, it is predictable, which helps with debugging.
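A minimal sketch of that approach (POSIX-style; assumes /dev/urandom is readable, with only bare-bones error handling):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned int seed = 0;
    FILE *f = fopen("/dev/urandom", "rb");
    if (f != NULL) {
        /* read one int's worth of kernel-provided randomness */
        if (fread(&seed, sizeof seed, 1, f) != 1)
            seed = 0;  /* fall back to a fixed seed on a short read */
        fclose(f);
    }
    srand(seed);
    printf("%d\n", rand());
    return 0;
}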
On Windows:
srand(GetTickCount());
provides a better seed than time(), since it's in milliseconds.
C++11 random_device
If you need reasonable quality then you should not be using rand() in the first place; you should use the <random> library. It provides lots of great functionality like a variety of engines for different quality/size/performance trade-offs, re-entrancy, and pre-defined distributions so you don't end up getting them wrong. It may even provide easy access to non-deterministic random data, (e.g., /dev/random), depending on your implementation.
#include <random>
#include <iostream>

int main() {
    std::random_device r;
    std::seed_seq seed{r(), r(), r(), r(), r(), r(), r(), r()};
    std::mt19937 eng(seed);
    std::uniform_int_distribution<> dist{1, 100};
    for (int i = 0; i < 50; ++i)
        std::cout << dist(eng) << '\n';
}
eng is a source of randomness, here a built-in implementation of the Mersenne twister. We seed it using random_device, which in any decent implementation will be a non-deterministic RNG, and seed_seq, to combine more than 32 bits of random data. For example, in libc++ random_device accesses /dev/urandom by default (though you can give it another file to access instead).
Next we create a distribution such that, given a source of randomness, repeated calls to the distribution will produce a uniform distribution of ints from 1 to 100. Then we proceed to using the distribution repeatedly and printing the results.
The best way is to use another pseudorandom number generator.
The Mersenne twister (and Wichmann-Hill) is my recommendation.
http://en.wikipedia.org/wiki/Mersenne_twister
I suggest you look at the unix_random.c file in the Mozilla code (I guess it is under mozilla/security/freebl/; it should be in the freebl library).
It uses system call info (like pwd, netstat, ...) to generate noise for the random number, and it is written to support most platforms (which can gain me bonus points :D).
The real question you must ask yourself is what quality of randomness you need.
libc's rand is an LCG: the quality of its randomness will be low whatever input you provide to srand.
If you simply need to make sure that different instances get different initializations, you can mix the process id (getpid), the thread id and a timer, combining the results with xor. The entropy should be sufficient for most applications.
Example:
#include <sys/timeb.h>   /* ftime() */
#include <unistd.h>      /* getpid() */
#include <pthread.h>     /* pthread_self() */

struct timeb tp;
ftime(&tp);
srand(static_cast<unsigned int>(getpid()) ^
      static_cast<unsigned int>(pthread_self()) ^
      static_cast<unsigned int>(tp.millitm));
For better random quality, use /dev/urandom. You can make the above code portable by using boost::thread and boost::date_time.
The C++11 version of the top-voted post by Jonathan Wright:
#include <ctime>
#include <random>
#include <thread>

...

const auto time_seed = static_cast<size_t>(std::time(0));
const auto clock_seed = static_cast<size_t>(std::clock());
const size_t pid_seed =
    std::hash<std::thread::id>()(std::this_thread::get_id());

std::seed_seq seed_value{ time_seed, clock_seed, pid_seed };

...

// E.g. seeding an engine with the above seed.
std::mt19937 gen;
gen.seed(seed_value);
#include <stdio.h>
#include <sys/time.h>

int main()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    printf("%d\n", (int)tv.tv_usec);
    return 0;
}
tv.tv_usec is in microseconds. This should be an acceptable seed.
As long as your program is only running on Linux (and it is an ELF executable), you are guaranteed that the kernel provides your process with a unique random seed in the ELF aux vector. The kernel gives you 16 random bytes, different for each process, which you can get with getauxval(AT_RANDOM). To use these for srand, use just an int's worth of them, like this:
#include <sys/auxv.h>

void initrand(void)
{
    unsigned int *seed;

    seed = (unsigned int *)getauxval(AT_RANDOM);
    srand(*seed);
}
It may be possible that this also translates to other ELF-based systems. I'm not sure what aux values are implemented on systems other than Linux.
Suppose you have a function with a signature like:
int foo(char *p);
An excellent source of entropy for a random seed is a hash of the following:
Full result of clock_gettime (seconds and nanoseconds) without throwing away the low bits - they're the most valuable.
The value of p, cast to uintptr_t.
The address of p, cast to uintptr_t.
At least the third, and possibly also the second, derive entropy from the system's ASLR, if available (the initial stack address, and thus current stack address, is somewhat random).
I would also avoid using rand/srand entirely, both for the sake of not touching global state, and so you can have more control over the PRNG that's used. But the above procedure is a good (and fairly portable) way to get some decent entropy without a lot of work, regardless of what PRNG you use.
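A sketch of that recipe (the function name and the final xor-mix are illustrative; the answer only specifies the three entropy sources and "a hash"):
#include <stdint.h>
#include <time.h>

/* Illustrative: derive a seed from the current time plus ASLR-influenced
   pointer values, as described above. */
unsigned int make_seed(char *p)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);

    uintptr_t arg_value   = (uintptr_t)p;   /* value of the argument   */
    uintptr_t arg_address = (uintptr_t)&p;  /* address of the argument */

    /* Any decent integer hash could replace this xor-mix. */
    return (unsigned int)(ts.tv_sec ^ ts.tv_nsec ^ arg_value ^ (arg_address << 1));
}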
For those using Visual Studio here's yet another way:
#include "stdafx.h"
#include <time.h>
#include <windows.h>
const __int64 DELTA_EPOCH_IN_MICROSECS= 11644473600000000;
struct timezone2
{
__int32 tz_minuteswest; /* minutes W of Greenwich */
bool tz_dsttime; /* type of dst correction */
};
struct timeval2 {
__int32 tv_sec; /* seconds */
__int32 tv_usec; /* microseconds */
};
int gettimeofday(struct timeval2 *tv/*in*/, struct timezone2 *tz/*in*/)
{
FILETIME ft;
__int64 tmpres = 0;
TIME_ZONE_INFORMATION tz_winapi;
int rez = 0;
ZeroMemory(&ft, sizeof(ft));
ZeroMemory(&tz_winapi, sizeof(tz_winapi));
GetSystemTimeAsFileTime(&ft);
tmpres = ft.dwHighDateTime;
tmpres <<= 32;
tmpres |= ft.dwLowDateTime;
/*converting file time to unix epoch*/
tmpres /= 10; /*convert into microseconds*/
tmpres -= DELTA_EPOCH_IN_MICROSECS;
tv->tv_sec = (__int32)(tmpres * 0.000001);
tv->tv_usec = (tmpres % 1000000);
//_tzset(),don't work properly, so we use GetTimeZoneInformation
rez = GetTimeZoneInformation(&tz_winapi);
tz->tz_dsttime = (rez == 2) ? true : false;
tz->tz_minuteswest = tz_winapi.Bias + ((rez == 2) ? tz_winapi.DaylightBias : 0);
return 0;
}
int main(int argc, char** argv) {
struct timeval2 tv;
struct timezone2 tz;
ZeroMemory(&tv, sizeof(tv));
ZeroMemory(&tz, sizeof(tz));
gettimeofday(&tv, &tz);
unsigned long seed = tv.tv_sec ^ (tv.tv_usec << 12);
srand(seed);
}
Maybe a bit overkill, but it works well for quick intervals. The gettimeofday function was found here.
Edit: upon further investigation, rand_s might be a good alternative for Visual Studio. It's not just a safe rand(); it's totally different and doesn't use the seed from srand. I had presumed it was almost identical to rand, just "safer".
To use rand_s, just don't forget to #define _CRT_RAND_S before stdlib.h is included.
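A minimal sketch (MSVC-only; rand_s returns 0 on success and fills its argument directly, with no srand involved):
#define _CRT_RAND_S  /* must come before stdlib.h */
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    unsigned int value;
    if (rand_s(&value) == 0) {
        printf("%u\n", value);
    }
    return 0;
}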
Assuming that the randomness of srand() + rand() is enough for your purposes, the trick is in selecting the best seed for srand. time(NULL) is a good starting point, but you'll run into problems if you start more than one instance of the program within the same second. Adding the pid (process id) is an improvement as different instances will get different pids. I would multiply the pid by a factor to spread them more.
But let's say you are using this for some embedded device and you have several in the same network. If they are all powered at once and you are launching the several instances of your program automatically at boot time, they may still get the same time and pid and all the devices will generate the same sequence of "random" numbers. In that case, you may want to add some unique identifier of each device (like the CPU serial number).
The proposed initialization would then be:
srand(time(NULL) + 1000 * getpid() + (uint) getCpuSerialNumber());
In a Linux machine (at least in the Raspberry Pi where I tested this), you can implement the following function to get the CPU Serial Number:
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>   // strtoull()
#include <string.h>   // strncmp(), strchr()

// Gets the CPU serial number as a 64-bit unsigned int. Returns 0 if not found.
uint64_t getCpuSerialNumber() {
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) {
        return 0;
    }
    char line[256];
    uint64_t serial = 0;
    while (fgets(line, 256, f)) {
        if (strncmp(line, "Serial", 6) == 0) {
            serial = strtoull(strchr(line, ':') + 2, NULL, 16);
        }
    }
    fclose(f);
    return serial;
}
Include the <ctime> header at the top of your program, and write
srand(time(NULL));
in your program before you generate your first random number. Here is an example of a program that prints a random number between one and ten:
#include <iostream>
#include <cstdlib>   // rand(), srand()
#include <ctime>     // time()
using namespace std;

int main()
{
    // Initialize srand
    srand(time(NULL));

    // Create a random number
    int n = rand() % 10 + 1;

    // Print the number
    cout << n << endl;

    // The main function is an int, so it must return a value
    return 0;
}

Fast non blocking keyboard IO in C under MinGW

I have written a CPU emulator in C on windows for fun, and I want it to handle its own IO in a non-blocking fashion: if there has been a keypress, return the char value of that keypress, else return 0.
At the moment I am using the following:
#include <conio.h>
...
unsigned int input(){
    unsigned int input_data;
    if (_kbhit()){
        input_data = (unsigned int)_getch();
    }
    else{
        input_data = 0;
    }
    return input_data;
}
And in terms of function, it is fine. The one problem I have is that it is very detrimental to the speed of the emulator: it can drop from 60-100 million instructions per second to the scale of tens or hundreds of thousands, just by running programs with lots of IO instructions. Is there a faster way to do this while keeping the same functionality?
Two options come to mind.
The first option is the easiest one: do not check on every call. OS calls are expensive, and if your emulator makes this one very often, it will slow everything down.
#include <conio.h>
...
unsigned int input(){
    static int cheat = 0;
    cheat = (cheat + 1) % 128;
    if (cheat){
        return 0;   /* skip the OS poll on 127 of every 128 calls */
    }
    unsigned int input_data;
    if (_kbhit()){
        input_data = (unsigned int)_getch();
    }
    else{
        input_data = 0;
    }
    return input_data;
}
The second option is to receive the actual keyboard input asynchronously and store it in a buffer, which your input() function then checks. This removes the OS call from the tight loop altogether; a rough sketch follows.
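Something along these lines, assuming Win32 threads (the buffer size, names and single-producer/single-consumer simplification are mine, not the answer's):
#include <conio.h>
#include <windows.h>

#define KBUF_SIZE 256

static volatile unsigned int kbuf[KBUF_SIZE];
static volatile LONG khead = 0, ktail = 0;

/* Runs in the background: blocks on _getch() and queues each keypress. */
static DWORD WINAPI keyboard_thread(LPVOID arg)
{
    (void)arg;
    for (;;) {
        unsigned int c = (unsigned int)_getch();  /* blocks this thread only */
        kbuf[khead % KBUF_SIZE] = c;
        InterlockedIncrement(&khead);
    }
    return 0;
}

/* Called from the emulator's tight loop: no OS call on the empty path. */
unsigned int input(void)
{
    if (ktail == khead)
        return 0;                                 /* nothing pending */
    unsigned int c = kbuf[ktail % KBUF_SIZE];
    InterlockedIncrement(&ktail);
    return c;
}

/* Once at startup:
   CreateThread(NULL, 0, keyboard_thread, NULL, 0, NULL); */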
