Due to the current chip shortage, I have had to purchase PIC microcontrollers that have a different specification to what was initially designed.
Initial: PIC24FJ256GA606
Revision 1: PIC24FJ512GA606
Revision 2: PIC24FJ1024GA606
In this instance, the microcontrollers are within the same family but have different memory sizes.
The binary was originally created to support multiple product variants that all use this microcontroller (hardware pins define the product type and thus the software features the firmware enables). I would like to continue with a single binary, but one that also supports the different microcontrollers specified above.
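For illustration, the variant detection amounts to something like the sketch below (the strap pins and variant names are placeholders for this example, not the real assignments):

    #include <xc.h>        /* XC16 device header for the PIC24 family */
    #include <stdint.h>

    typedef enum { VARIANT_A, VARIANT_B, VARIANT_C, VARIANT_UNKNOWN } product_variant_t;

    /* Placeholder pins: assumes RB0/RB1 were configured as digital inputs
       during initialisation. The real straps may differ.                 */
    static product_variant_t read_product_variant(void)
    {
        uint8_t straps = 0;
        straps |= (uint8_t)PORTBbits.RB0;                /* strap bit 0 */
        straps |= (uint8_t)(PORTBbits.RB1 << 1);         /* strap bit 1 */

        switch (straps) {
        case 0:  return VARIANT_A;
        case 1:  return VARIANT_B;
        case 2:  return VARIANT_C;
        default: return VARIANT_UNKNOWN;
        }
    }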
We flash the microcontrollers using a PICKIT 4 during manufacturing.
A custom bootloader is also flashed onto the microcontroller during manufacturing to allow the firmware update procedure to be driven by another PIC microcontroller out in the field (it's a distributed system connected by RS-485).
I use MPLAB X IDE for development and for building production binaries.
I guess the key question is whether this is even possible.
If so, how would I go about creating a single binary that supports multiple processors?
Normally, a single binary corresponds to one specific controller, especially since Microchip has a really wide variety of microcontrollers. But as you mentioned in your question:
In this instance, the microcontrollers are within the same family but have different memory sizes.
You may be able to use the same binary as long as you select the hardware very carefully. If those three models have the same pin mapping, even if some have fewer pins and some have more, then you would select the common corresponding pins for the I/O functions wherever possible. Since those devices are from the same family, they should have common I/O pins with the same port and pin numbering.
If those similarities, including the internal registers, cover the functionality of your system, you can use the same binary for those three (or more) devices, as long as you select the hardware very carefully and no function is left without the hardware it needs.
But it is very hard to say the same for devices that do not belong to a series in the same family. In that case you would check the hardware similarities for each function of your system. If the micro provides the same hardware, you can first try to see whether it can be programmed, and then whether it functions the same way. Once you are sufficiently sure, you can add that model to your list of usable models, too.
Hope this gives you a helpful idea.
For two microcontrollers to have compatible binaries, they need to fulfil the following:
The CPU cores must have an identical instruction set architecture (ISA). Be aware that the manufacturer's term "code compatible" might only mean that two parts have the same ISA and are compatible at the assembly-language level, as long as no peripherals or memories are used...
If they have different memory sizes, the part with the larger memory must be a superset of the part with the smaller memory, and they must map memory to the same addresses.
All hardware peripherals used must be identical and any peripheral routing registers used must also be identical. Please note that the very same core of the same family but with a different package and pin routing might mean that peripheral routing registers must be set differently.
There can be no check of MCU part-number registers etc. inside the firmware, nor can there be any in the flash programming equipment.
In general this means that the MCUs must be of the very same family and the manufacturer must guarantee that they are replaceable.
You can very likely switch between different temperature spec parts of the same model and between automotive/non-automotive qual parts of the same model without changing the code.
What is the CPU_STATE_MAX macro and what is it used for? I couldn't find any descriptive/relevant google results or man pages.
I am on Mac OSX Snowleopard, if that makes any difference.
See this--it corresponds to the number of CPU states defined in the machine.h header. The different states are then used to index pieces of information that are tracked per CPU state--idle, 'nice', and so on.
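A minimal sketch of how it typically gets used (this goes through the Mach host_statistics() call; the CPU_STATE_* constants come from <mach/machine.h>):

    #include <stdio.h>
    #include <mach/mach.h>
    #include <mach/mach_host.h>

    int main(void)
    {
        host_cpu_load_info_data_t load;
        mach_msg_type_number_t count = HOST_CPU_LOAD_INFO_COUNT;

        /* cpu_ticks is an array of CPU_STATE_MAX counters, one per CPU state. */
        if (host_statistics(mach_host_self(), HOST_CPU_LOAD_INFO,
                            (host_info_t)&load, &count) != KERN_SUCCESS)
            return 1;

        printf("user:   %u ticks\n", load.cpu_ticks[CPU_STATE_USER]);
        printf("system: %u ticks\n", load.cpu_ticks[CPU_STATE_SYSTEM]);
        printf("idle:   %u ticks\n", load.cpu_ticks[CPU_STATE_IDLE]);
        printf("nice:   %u ticks\n", load.cpu_ticks[CPU_STATE_NICE]);
        return 0;
    }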
I know there are some unix utils for simple architecture queries:
arch
nproc
lsb_release -a
are there any simple ways to find out about the cluster/supercomputer/nodes - like to find out the number of teraflops of the machine and so on?
Yes and no.
No, you won't be able to find the effective number of FLOPS the cluster can deliver in practice; you need a benchmark for that, such as HPL, the one used for the Top500 ranking. The value given by the benchmark will depend on the power of the processors, the speed of the memory, the latency of the network, etc.
But yes, you will be able to compute the maximum theoretical power (in FLOPS) of one node from the contents of its /proc/cpuinfo, based on the processor family and frequency and on the number of physical cores. See formulas here.
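For illustration, the arithmetic for a hypothetical node (the FLOPs-per-cycle figure depends on the SIMD/FMA capabilities of the processor family, so treat all the numbers as placeholders to be replaced with what /proc/cpuinfo reports):

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical node - substitute your own values. */
        double sockets          = 2.0;
        double cores_per_socket = 8.0;
        double clock_ghz        = 2.5;   /* "cpu MHz" / 1000              */
        double flops_per_cycle  = 8.0;   /* depends on SIMD width and FMA */

        double peak = sockets * cores_per_socket * clock_ghz * flops_per_cycle;
        printf("Theoretical peak: %.1f GFLOPS per node\n", peak);
        return 0;
    }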
Short answer: no.
Slightly longer answer: no. You have to run benchmarks to measure those. The information should be available from the owners/administrators of the supercomputer in question.
No standard way - most such clusters/supercomputers/nodes are custom built, and the administrators may have added tools to determine current and available usage, such as the number of free nodes, but simply having a tool that returns such a number wouldn't be very useful in practice.
The only way to actually get the number is to measure it, and there are several different methods of approaching this. It may have been measured for the system you are using (you can ask the administrators whether it has been), but otherwise it's probably just a matter of "do we have enough processing power?" rather than shooting for some numerical target.
I need to know the effect of different platforms on the System.Random object (Silverlight). Is the sequence created the same on Mac, PC and across 32 / 64 bit?
Excuse my "stupid" answer, but to my mind random numbers should always be considered random, and the generated sequences should be treated as NOT the same across any "domain". I know that the .NET (or Silverlight) random number generators use a pseudo-random algorithm that depends on the seed value and will generate the same number sequence when given the same seed, but I just wouldn't rely on this fact.
It seems that you have some kind of "expectation" when you need to have random numbers synchronized across several platforms, and using a Random Number generator for expected value sequences looks weird to me.
If you can tell us more about your use case, maybe we can find another more solid solution?
Just my opinion.
The algorithm used to generate the random numbers is encoded into the runtime. Hence, regardless of the platform, you should see the same set of "random" numbers for a given seed value.
The exact behaviour of the default constructor for Random (where the seed value is time based) may vary slightly from platform to platform. For example, rapid creation of Random instances may produce some instances that generate the same sequence; the distribution of these "duplicates" may vary with all sorts of conditions, including the platform.
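To illustrate the general point with a deliberately simple stand-in (this is not the algorithm System.Random uses, just a fixed algorithm fed a fixed seed, which is what makes the output platform-independent):

    #include <stdio.h>
    #include <stdint.h>

    /* A tiny linear congruential generator: any platform running this exact
       code with the same seed prints the same sequence.                     */
    static uint32_t state;

    static void my_seed(uint32_t seed) { state = seed; }

    static uint32_t my_rand(void)
    {
        state = state * 1664525u + 1013904223u;   /* well-known LCG constants */
        return state;
    }

    int main(void)
    {
        my_seed(42);
        for (int i = 0; i < 5; i++)
            printf("%u\n", my_rand());            /* identical output everywhere */
        return 0;
    }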
Okay, I guess this is entirely subjective and whatnot, but I was thinking about entropy sources for random number generators. The story goes that most generators are seeded with the current time, correct? Well, I was curious as to what other sources could be used to generate perfectly valid random (in the loose sense) numbers.
Would using multiple sources (Such as time + current HDD seek time [We're being fantastical here]) together create a "more random" number than a single source? What are the logical limits of the amount of sources? How much is really enough? Is the time chosen simply because it is convenient?
Excuse me if this sort of thing is not allowed, but I'm curious as to the theory behind the sources.
The Wikipedia article on hardware random number generators lists a couple of interesting sources of random numbers based on physical properties.
My favorites:
A nuclear decay radiation source detected by a Geiger counter attached to a PC.
Photons travelling through a semi-transparent mirror. The mutually exclusive events (reflection — transmission) are detected and associated to "0" or "1" bit values respectively.
Thermal noise from a resistor, amplified to provide a random voltage source.
Avalanche noise generated from an avalanche diode. (How cool is that?)
Atmospheric noise, detected by a radio receiver attached to a PC.
The problems section of the Wikipedia article also describes the fragility of a lot of these sources/sensors. Sensors almost always produce decreasingly random numbers as they age/degrade. These physical sources should be constantly checked by statistical tests which can analyze the generated data, ensuring the instruments haven't broken silently.
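As a sketch of the kind of check being described, here is a very crude monobit frequency test (real health tests, e.g. the FIPS 140-2 or NIST SP 800-90B ones, are more involved; /dev/urandom stands in here for the sensor under test):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Crude monobit test: in a large sample of supposedly random bytes, the
       number of 1 bits should stay close to half of the total bit count.    */
    static bool monobit_ok(const uint8_t *buf, size_t len)
    {
        size_t ones = 0;
        for (size_t i = 0; i < len; i++) {
            uint8_t b = buf[i];
            while (b) {                /* count the set bits in this byte */
                ones += b & 1u;
                b >>= 1;
            }
        }
        size_t total = len * 8;
        size_t lo = total / 2 - total / 100;   /* allow roughly +/- 1% */
        size_t hi = total / 2 + total / 100;
        return ones >= lo && ones <= hi;
    }

    int main(void)
    {
        static uint8_t sample[1 << 20];
        FILE *dev = fopen("/dev/urandom", "rb");
        if (!dev || fread(sample, 1, sizeof sample, dev) != sizeof sample)
            return 1;
        fclose(dev);
        printf("monobit test: %s\n", monobit_ok(sample, sizeof sample) ? "pass" : "FAIL");
        return 0;
    }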
SGI once used photos of a lava lamp at various "glob phases" as the source for entropy, which eventually evolved into an open source random number generator called LavaRnd.
I use Random.ORG; they provide free random data from atmospheric noise, which I use to periodically re-seed a Mersenne Twister RNG. It's about as random as you can get with no hardware dependencies.
Don't worry about a "good" seed for a random number generator. The statistical properties of the sequence do not depend on how the generator is seeded. There are other things, however, to worry about. See Pitfalls in Random Number Generation.
As for hardware random number generators, these physical sources have to be measured, and the measurement process has systematic errors. You might find "pseudo" random numbers to have higher quality than "real" random numbers.
Linux kernel uses device interrupt timing (mouse, keyboard, hard drives) to generate entropy. There's a nice article on Wikipedia on entropy.
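If you just want to consume that kernel-gathered entropy from your own program rather than re-implement the gathering, something along these lines is the usual route on Linux (getrandom() needs a reasonably recent kernel and glibc; reading /dev/urandom is the traditional fallback):

    #include <stdio.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/random.h>   /* getrandom(), glibc >= 2.25 / Linux >= 3.17 */

    int main(void)
    {
        uint32_t seed;

        /* Pull 4 bytes of kernel-gathered entropy to seed a userspace PRNG. */
        if (getrandom(&seed, sizeof seed, 0) != (ssize_t)sizeof seed) {
            perror("getrandom");
            return 1;
        }

        printf("seed = %u\n", seed);
        return 0;
    }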
Modern RNGs are both checked against correlations in nearby seeds and run several hundred iterations after the seeding. So, the unfortunately boring but true answer is that it really doesn't matter very much.
Generally speaking, random physical processes have to be checked to confirm that they conform to a uniform distribution, and otherwise detrended.
In my opinion, it's often better to use a very well understood pseudo-random number generator.
I've used an encryption program that used the user's mouse movement to generate random numbers. The only problem was that the program had to pause and ask the user to move the mouse around randomly for a few seconds in order to work properly, which might not always be practical.
I found HotBits several years ago - genuinely random numbers generated from radioactive decay.
There are limits on how many numbers you can download a day, but it has always amused me to use these as really, really random seeds for RNG.
Some TPM (Trusted Platform Module) "chips" have a hardware RNG. Unfortunately, the (Broadcom) TPM in my Dell laptop lacks this feature, but many computers sold today come with a hardware RNG that uses truly unpredictable quantum mechanical processes. Intel has implemented the thermal noise variety.
Also, don't use the current time alone to seed an RNG for cryptographic purposes, or any application where unpredictability is important. Using a few low order bits from the time in conjunction with several other sources is probably okay.
A similar question may be useful to you.
Sorry I'm late to this discussion (what is it 3 1/2 years old now?), but I've a rekindled interest in PRN generation and alternate sources of entropy. Linux kernel developer Rusty Russell recently had a discussion on his blog on alternate sources of entropy (other than /dev/urandom).
But, I'm not all that impressed with his choices; a NIC's MAC address never changes (although it is unique from all others), and PID seems like too small a possible sample size.
I've dabbled with a Mersenne Twister (on my Linux box) which is seeded with the following algorithm. I'm asking for any comments/feedback if anyone's willing and interested:
1. Create an array buffer of 64 bits + 256 bits * the number of /proc files below.
2. Place the time stamp counter (TSC) value in the first 64 bits of this buffer.
3. For each of the following /proc files, calculate the SHA256 sum:
/proc/meminfo
/proc/self/maps
/proc/self/smaps
/proc/interrupts
/proc/diskstats
/proc/self/stat
4. Place each 256-bit hash value into its own area of the array created in (1).
5. Create a SHA256 hash of this entire buffer. NOTE: I could (and probably should) use a different hash function completely independent of the SHA functions - this technique has been proposed as a "safeguard" against weak hash functions.
Now I have 256 bits of HOPEFULLY random (enough) entropy data to seed my Mersenne Twister. I use the above to populate the beginning of the MT Array (624 32-bit integers), and then initialize the remainder of that array with the MT author's code. Also, I could use a different hash function (e.g. SHA384, SHA512), but I'd need a different size array buffer (obviously).
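A stripped-down sketch of that seeding step looks roughly like this (OpenSSL's one-shot SHA256() and the compiler's __rdtsc() are used here as stand-ins for whatever hash and TSC read you prefer, and only two of the /proc files are shown for brevity):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <openssl/sha.h>     /* SHA256(); link with -lcrypto */
    #include <x86intrin.h>       /* __rdtsc() on x86             */

    #define NFILES 2             /* the full scheme hashes six /proc files */

    /* Hash the contents of one /proc file into a 32-byte digest. */
    static void hash_proc_file(const char *path, unsigned char *digest)
    {
        unsigned char data[65536];
        size_t n = 0;
        FILE *f = fopen(path, "r");
        if (f) {
            n = fread(data, 1, sizeof data, f);
            fclose(f);
        }
        SHA256(data, n, digest);
    }

    int main(void)
    {
        const char *files[NFILES] = { "/proc/meminfo", "/proc/self/maps" };
        unsigned char buf[8 + NFILES * SHA256_DIGEST_LENGTH];
        unsigned char seed[SHA256_DIGEST_LENGTH];

        uint64_t tsc = __rdtsc();                        /* step 2: TSC into first 64 bits */
        memcpy(buf, &tsc, sizeof tsc);

        for (int i = 0; i < NFILES; i++)                 /* steps 3-4: per-file hashes      */
            hash_proc_file(files[i], buf + 8 + i * SHA256_DIGEST_LENGTH);

        SHA256(buf, sizeof buf, seed);                   /* step 5: hash the whole buffer   */

        /* 'seed' now holds 256 bits to feed into the Mersenne Twister's array init. */
        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
            printf("%02x", seed[i]);
        printf("\n");
        return 0;
    }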
The original Mersenne Twister code called for one single 32-bit seed, but I feel that's horribly inadequate. Running "merely" 2^32-1 different MTs in search of breaking the crypto is not beyond the realm of practical possibility in this day and age.
I'd love to read anyone's feedback on this. Criticism is more than welcome. I will defend my use of the /proc files as above because they're constantly changing (especially the /proc/self/* files), and the TSC always yields a different value (nanosecond [or better] resolution, IIRC). I've run Diehard tests on this (to the tune of several hundred billion bits), and it seems to be passing with flying colors. But that's probably more a testament to the soundness of the Mersenne Twister as a PRNG than to how I'm seeding it.
Of course, these aren't totally impervious to someone hacking them, but I just don't see all of these (and SHA*) being hacked and broken in my lifetime.
Some use keyboard input (the timing between keystrokes). I think I read in a novel that radio static reception can be used - but of course that requires extra hardware and software...
Noise on top of the Cosmic Microwave Background spectrum. Of course you must first remove some anisotropy, foreground objects, correlated detector noise, galaxy and local group velocities, polarizations etc. Many pitfalls remain.
The source of the seed isn't that important. More important is the pseudo-random number generator algorithm. However, some time ago I heard about seed generation for some banking operations. They combined many factors together:
time
processor temperature
fan speed
cpu voltage
I don't remember more :)
Even if some of these parameters don't change much over time, you can feed them into a good hashing function.
How do you generate a good random number?
Maybe we can take into account the infinite number of universes? If it is true that new parallel universes are being created all the time, we can do something like this:
    int Random() {
        return Universe.object_id % MAX_INT;
    }
At every moment we should be on a different branch of the parallel universes, so we should get a different id. The only problem is how to get the Universe object :)
How about spinning off a thread that manipulates some variable in a tight loop for a fixed amount of time before it is killed? What you end up with will depend on the processor speed, system load, etc. Very hokey, but better than just srand(time(NULL))...
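For the sake of illustration, a pthreads version (the final count depends on scheduling noise and CPU speed, so it is more of a toy than a real entropy source; compile with -lpthread):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static volatile unsigned long counter;
    static volatile int running = 1;

    /* Spin as fast as the scheduler allows; the final count depends on CPU
       speed, system load and when the main thread gets around to stopping us. */
    static void *spin(void *arg)
    {
        (void)arg;
        while (running)
            counter++;
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, spin, NULL);
        usleep(100 * 1000);              /* let it run for roughly 100 ms */
        running = 0;
        pthread_join(t, NULL);

        srand((unsigned)counter);        /* still weak, but noisier than time(NULL) alone */
        printf("seed = %lu, first value = %d\n", counter, rand());
        return 0;
    }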
Don't worry about a "good" seed for a random number generator. The statistical properties of the sequence do not depend on how the generator is seeded.
I disagree with John D. Cook's advice. If you seed the Mersenne Twister with all bits set to zero except one, it will initially generate numbers which are anything but random. It takes a long time for the generator to churn this state into anything that would pass statistical tests. Simply setting the first 32 bits of the generator to a seed will have a similar effect. Also, if the entire state is set to zero the generator will produce endless zeroes.
Properly written RNG code will have a properly written seeding algorithm that accepts, say, a 64-bit value and seeds the generator so that it produces decent random numbers for every possible input. So if you are using a reliable library, any seed will do. But if you hack together your own implementation, you need to be careful.
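To illustrate what such a seeding routine typically does (this is a generic SplitMix64-style seed expansion, not the Mersenne Twister's own initialization code), a single 64-bit seed can be stretched so that no word of the state ends up zero or trivially related to its neighbours:

    #include <stdint.h>
    #include <stdio.h>

    #define STATE_WORDS 16   /* whatever the target generator's state size is */

    /* SplitMix64 step: stretches one 64-bit seed into a stream of well-mixed
       words, so even a "bad" seed such as 1 fills the state with varied bits. */
    static uint64_t splitmix64(uint64_t *x)
    {
        uint64_t z = (*x += 0x9E3779B97F4A7C15ull);
        z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ull;
        z = (z ^ (z >> 27)) * 0x94D049BB133111EBull;
        return z ^ (z >> 31);
    }

    static void seed_state(uint64_t state[STATE_WORDS], uint64_t seed)
    {
        for (int i = 0; i < STATE_WORDS; i++)
            state[i] = splitmix64(&seed);
    }

    int main(void)
    {
        uint64_t state[STATE_WORDS];
        seed_state(state, 1);                      /* a "poor" seed is fine now */
        for (int i = 0; i < 4; i++)
            printf("%016llx\n", (unsigned long long)state[i]);
        return 0;
    }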