How does the following code make the PC beep? - c

void Sound(int f)
{
    USHORT B = 1193180 / f;        /* PIT input clock divided by the desired frequency */
    UCHAR temp = In_8(0x61);
    temp = temp | 3;               /* set bits 0 and 1 of port 0x61 */
    Out_8(0x61, temp);
    Out_8(0x43, 0xB6);             /* channel 2, low byte then high byte, square wave */
    Out_8(0x42, B & 0xF);          /* low byte of the divisor (see the UPDATE below) */
    Out_8(0x42, (B >> 8) & 0xF);   /* high byte of the divisor */
}
In_8/Out_8 read/write 8 bits from/to a specified port (implementation details omitted).
How does it make the PC beep?
UPDATE
Why is &0xF used here? Shouldn't it be 0xFF?

The PC has an 8253/8254 timer chip (the programmable interval timer), which, together with the speaker control bits, is reached through ports 0x61, 0x43 and 0x42.
When port 0x61 bit 0 is set to 1, this means "turn on the timer that is connected to the speaker".
When port 0x61 bit 1 is set to 1, this means "turn on the speaker".
This is done in the first part of your code.
The second part puts the "magic value" 0xB6 on port 0x43, which means that the next two bytes arriving at port 0x42 will be interpreted as the divisor for the timer frequency. The resulting frequency (1193180 / divisor) then drives the speaker.
http://gd.tuwien.ac.at/languages/c/programming-bbrown/advcw3.htm#sound
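To the UPDATE: the & 0xF masks in the question's last two lines keep only 4 bits each, so the divisor programmed into the timer is wrong; each half of the 16-bit divisor should be masked with 0xFF (or simply truncated to a byte). A minimal sketch of the corrected writes, reusing the In_8/Out_8 helpers and types from the question (Sound_fixed is just an illustrative name):
void Sound_fixed(int f)
{
    USHORT divisor = 1193180 / f;          /* PIT input clock / desired frequency */
    UCHAR temp = In_8(0x61);

    Out_8(0x61, temp | 3);                 /* gate timer channel 2 and enable the speaker */
    Out_8(0x43, 0xB6);                     /* channel 2, low byte then high byte, square wave */
    Out_8(0x42, divisor & 0xFF);           /* full low byte of the divisor */
    Out_8(0x42, (divisor >> 8) & 0xFF);    /* full high byte of the divisor */
}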

Related

STMF0 CRC Issue

I am using the STM32F0 using register level coding and am having problems with the CRC module.
Basically I can't get the results to agree with online calculators.
I've stripped it right back to as simple as possible.
If I just reset the CRC then read the Data Register out I get 0xFFFFFFFF which I would expect as that's the initial value.
Even if I just write zero in and read the result, it does not agree with other tools.
The STM outputs 0xC704DD7B and the online tools give 0xF4DBDF21.
As far as I can see all the parameters are the same (I have not tried to hand calculate it!).
My bare bones code is (and I am reading the result in the debugger from the register)...
// Reset the CRC.
SET_BIT(CRC->CR, CRC_CR_RESET);
// Write 0.
CRC->DR = 0;
I am not really sure, but maybe this helps:
I once had the same problem and tried to figure out how to get the "correct" CRC32. Unfortunately there is not just one way a CRC32 can be calculated; there are several variants. See https://crccalc.com/
I always leave the settings of the CRC peripheral at their defaults:
Default Polynomial State -> Enable
Default Init Value State -> Enable
Input Data Inversion Mode -> None
Output Data Inversion Mode -> Disable
Except "Input Data Format", which I set to "Words".
When sending data to the peripheral, I reverse the byte order of each word (__REV), and the result is byte-reversed again. This leads to a CRC32 that can be verified as CRC-32/MPEG-2.
I have a function that tests whether I have configured the CRC peripheral correctly, i.e. whether I get "the correct" CRC-32/MPEG-2:
uint8_t CRC32Test(void) {
    // #brief  test if the CRC32 module is configured correctly
    // #params none, void
    // #return u8 status: 1 = OK, 0 = NOK (not configured correctly)
    // If the CRC module is configured correctly, this data must return
    // the CRC32 0x07D41272 (big endian) / 0x7212d407 (little endian).
    uint8_t retval = 0;
    uint8_t testdata[] = {0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x00, 0x00};
    uint32_t CRCdataU32[3] = {0,};
    uint32_t* pCRCdata = (uint32_t*)testdata;
    uint32_t dataSz = 3;

    CRCdataU32[0] = __REV(*pCRCdata++);   // byte-reverse each 32-bit word before feeding it
    CRCdataU32[1] = __REV(*pCRCdata++);
    CRCdataU32[2] = __REV(*pCRCdata++);

    uint32_t testCRC32 = HAL_CRC_Calculate(&hcrc, CRCdataU32, dataSz);
    testCRC32 = __REV(testCRC32);         // byte-reverse the result as well

    if (testCRC32 == 0x7212d407) retval = 1;
    return retval;
}
I verified this using crccalc.com.
This is probably not the most elegant code, but it works for me. I use it for data transfer between the MCU and a PC over RS232/RS485. I don't care much which particular CRC32 variant I use; I just need the sender and the receiver to produce the same result, and I achieve that with this code.
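For cross-checking results like this without the peripheral, here is a minimal software reference, assuming the standard CRC-32/MPEG-2 parameters (polynomial 0x04C11DB7, initial value 0xFFFFFFFF, no input/output reflection, no final XOR), which as far as I know are also what the STM32 CRC peripheral uses by default when data is fed word-wise; crc32_mpeg2_words is just an illustrative name. Fed a single zero word, it should reproduce the 0xC704DD7B reported in the question.
#include <stdint.h>
#include <stddef.h>

static uint32_t crc32_mpeg2_words(const uint32_t *data, size_t nwords)
{
    uint32_t crc = 0xFFFFFFFFu;            /* initial value */

    for (size_t i = 0; i < nwords; i++) {
        crc ^= data[i];                    /* feed one 32-bit word, MSB first */
        for (int bit = 0; bit < 32; bit++) {
            if (crc & 0x80000000u)
                crc = (crc << 1) ^ 0x04C11DB7u;
            else
                crc <<= 1;
        }
    }
    return crc;                            /* no final XOR, no output reflection */
}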

Clocking down 16-bit wide DRAM with a 32-bit MCU: strange issue encountered

I am working on custom board containing a 32bit MCU (Cortex A5) and a 16bit wide DRAM chip (LPDDR2). The MCU has an on-board DRAM controller which supports both DDR3 and LPDDR2, and I do have a working setup using LPDDR2.
Now, I am trying to halve the clock rate at boot time on both MCU and DRAM (they both use the same PLL) due to power restrictions, and this is where my troubles begin.
As mentioned, I do have a working setup using the full frequency (DRAM: 400MHz, MCU: 396MHz), so one would expect that halving the frequency and updating the timings according to the DRAM datasheet should yield another working setup, but no.
The DRAM init runs at boot time from MCU intram, as do any tests. The whole procedure is handled by a board-specific version of U-Boot 2015.04.
I have a collection of tests that run at MCU boot to verify DRAM integrity. One of these tests is a so-called "walking bit" test, where I use a 32-bit uint and toggle each bit in sequence, reading back to verify.
What I found was that, when reading back, the lower 16 bits have not been touched, while the upper 16 bits seems altered. After some investigation, I found the following pattern (assuming a watermark "0xaa"):
write -> readback
0x8000_0000 -> 0x0000_aaaa
0x4000_0000 -> 0x0000_aaaa
0x2000_0000 -> 0x0000_aaaa
0x1000_0000 -> 0x0000_aaaa
[...]
0x0008_0000 -> 0x0000_aaaa
0x0004_0000 -> 0x0000_aaaa
0x0002_0000 -> 0x0000_aaaa
0x0001_0000 -> 0x0000_aaaa
0x0000_8000 -> 0x8000_aaaa
0x0000_4000 -> 0x4000_aaaa
0x0000_2000 -> 0x2000_aaaa
0x0000_1000 -> 0x1000_aaaa
[...]
0x0000_0008 -> 0x0008_aaaa
0x0000_0004 -> 0x0004_aaaa
0x0000_0002 -> 0x0002_aaaa
0x0000_0001 -> 0x0001_aaaa
The watermark is present, although I suspect it got there from a previous debugging session. I will address this later; my primary focus at the moment is getting the "walking bit" test to pass.
Here is a memory dump:
(gdb) x/16b addr
0x80000000: 0x00 0x00 0x55 0x55 0x55 0x55 0x00 0x80
0x80000008: 0xaa 0xaa 0xaa 0xaa 0xaa 0xaa 0x00 0x55
(gdb) p/x *addr
$59 = 0x55550000
(gdb) set *addr = 0xaabbccdd
(gdb) p/x *addr
$60 = 0xccdd0000
(gdb) x/16b addr
0x80000000: 0x00 0x00 0xdd 0xcc 0xbb 0xaa 0x00 0x80
0x80000008: 0xaa 0xaa 0xaa 0xaa 0xaa 0xaa 0x00 0x55
Can anyone tell me what might cause this type of behaviour?
Cheers
Note: I have intentionally left out MCU and DRAM specifications, as I believe the question can be addressed with only JEDEC/DFI in mind.
Edit: Added memory dump.
Edit: Here is the source of the "walking bit" test. It runs from MCU intram on a memory area located in DRAM. Assumed bug-free:
static u32 __memtest_databus(volatile u32 * const addr)
{
    /* Walking bit */
    u32 pattern = (1 << 31);
    u32 failmask = 0;

    for (; pattern; pattern >>= 1)
    {
        *addr = pattern;
        if (*addr != pattern)
            failmask |= pattern;
    }

    return failmask;
}
Edit: The PLL and VCO have been checked, and the settings are correct. The PLL is stable and the DRAM PHY does obtain a lock.
Link to DRAM Data Sheet
The bytes look like they have been shifted, not altered.
quote
(gdb) x/16b addr
0x80000000: 0x00 0x00 *0xdd 0xcc 0xbb 0xaa* 0x00 0x80
0x80000008: 0xaa 0xaa 0xaa 0xaa 0xaa 0xaa 0x00 0x55
unquote
You have one severe bug here: u32 pattern = (1 << 31);.
The integer constant 1 is of type int, which is 32 bits on your ARM system.
You left shift this signed number out of bounds and invoke undefined behavior; anything could happen. The variable pattern can get any value.
Correct code would be u32 pattern = (u32)1 << 31; or u32 pattern = 1u << 31;
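A minimal standalone sketch of the difference (names are for illustration only):
#include <stdint.h>
#include <stdio.h>

typedef uint32_t u32;

int main(void)
{
    /* (1 << 31): the constant 1 is a signed int, so the shift overflows
     * into the sign bit, which is undefined behavior in C. */

    /* (1u << 31): the unsigned constant makes the whole expression
     * unsigned, so the result is the well-defined value 0x80000000. */
    u32 pattern = (1u << 31);

    printf("0x%08lx\n", (unsigned long)pattern);   /* prints 0x80000000 */
    return 0;
}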

Modbus implementation in C for an embedded system (LPCXpresso)

I am new to Modbus and I have to program an LPCXpresso baseboard as a master to collect readings from a power meter using the RS485 Modbus protocol.
I am familiar with the protocol (the PDU/ADU framing, function codes, master-slave operation) from reading the specifications at modbus.org.
However, I have difficulties with the implementation when writing the code in C.
So my questions are:
1. Do I have to open a connection, set the baud rate, etc. when I am starting the connection?
2. I am thinking of sending the frame as a byte[]. Is this correct, or are there other ways to do it?
3. Does the data sent have to be in hexadecimal, binary, or integer form?
4. How do I do CRC generation and checking?
I will really appreciate any kind of help and assistance :) Sorry if the questions are not very specific or are too basic.
Step 1: Forget about the energy meter and Modbus for now. The most important thing is to get the hardware working. RS485 is simply a serial port. Read the manual on how to initialize the serial port on your hardware, and send a single byte to your PC and back. Then send hundreds of bytes to the PC and back.
Step 2: Get a timer on your hardware working as well. The Modbus protocol has some requirements on timing, so you'll need it too.
Step 3: Get the Modbus specification. It will explain the protocol format and checksums. Use a Modbus library or write your own. Make sure you can make it work with a PC before you move on to the energy meter.
Step 4: If you have a problem, ask specific question about it on SO.
First of all: is it Modbus RTU or ASCII?
As for opening the connection: yes, of course. You need to set everything up as the specs describe.
As for the frame: yes, it is an unsigned char[]. The structure is described by the specs.
As for the data format: the question doesn't quite make sense. You always send the information as a "memory dump", but with RTU you send one byte per memory byte, while with ASCII you send two bytes per memory byte. E.g. if you have to send the byte 0xAE: RTU = 0xAE, ASCII = 0x41 0x45. In the case of RTU, if you have to send an int (4 bytes) you send those bytes as they are stored in memory, e.g. 12345 will be sent as 0x00 0x00 0x30 0x39 (big endian) or 0x39 0x30 0x00 0x00 (little endian).
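To make the RTU/ASCII difference concrete, here is a minimal sketch of the ASCII-mode encoding of one data byte (byte_to_ascii is just an illustrative name); in RTU mode the byte would be transmitted as-is:
#include <stdint.h>

static void byte_to_ascii(uint8_t b, char out[2])
{
    static const char hex[] = "0123456789ABCDEF";
    out[0] = hex[b >> 4];     /* high nibble, e.g. 0xA -> 'A' (0x41) */
    out[1] = hex[b & 0x0F];   /* low nibble,  e.g. 0xE -> 'E' (0x45) */
}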
As for the CRC: the calculation is explained in the specs. Below is the code from my old C++Builder component:
unsigned short TLPsComPort::Calculate_CRC16(int Message_Length, char *Message)
{
    char Low_CRC;
    char Bit;
    // Constant of the Modbus protocol
    unsigned short CONSTANT = 0xA001;
    unsigned short CRC_REGISTER = 0xFFFF;

    for (int i = 0; i < Message_Length; i++)
    {
        Low_CRC = CRC_REGISTER;
        Low_CRC = *(Message + i) ^ Low_CRC;
        CRC_REGISTER = ((CRC_REGISTER & 0xFF00) | (Low_CRC & 0x00FF));
        for (int j = 0; j < 8; j++)
        {
            Bit = CRC_REGISTER & 0x0001;
            CRC_REGISTER = (CRC_REGISTER >> 1) & 0x7FFF;
            if (Bit) CRC_REGISTER = CRC_REGISTER ^ CONSTANT;
        }
    }
    return CRC_REGISTER;
}
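As a usage sketch, here is how a master might build a Modbus RTU "Read Holding Registers" (function code 0x03) request and append the CRC, low byte first. This is a plain-C rework of the routine above; modbus_crc16 and build_read_holding_request are illustrative names, and the actual register addresses to read come from your power meter's documentation.
#include <stdint.h>
#include <stddef.h>

static uint16_t modbus_crc16(const uint8_t *msg, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= msg[i];
        for (int bit = 0; bit < 8; bit++) {
            if (crc & 0x0001)
                crc = (crc >> 1) ^ 0xA001;
            else
                crc >>= 1;
        }
    }
    return crc;
}

static size_t build_read_holding_request(uint8_t *frame, uint8_t slave,
                                         uint16_t start_reg, uint16_t count)
{
    frame[0] = slave;                       /* slave address               */
    frame[1] = 0x03;                        /* function: read holding regs */
    frame[2] = (uint8_t)(start_reg >> 8);   /* start address, high byte    */
    frame[3] = (uint8_t)(start_reg);        /* start address, low byte     */
    frame[4] = (uint8_t)(count >> 8);       /* register count, high byte   */
    frame[5] = (uint8_t)(count);            /* register count, low byte    */

    uint16_t crc = modbus_crc16(frame, 6);
    frame[6] = (uint8_t)(crc & 0xFF);       /* CRC low byte first          */
    frame[7] = (uint8_t)(crc >> 8);         /* then CRC high byte          */
    return 8;                               /* frame length in bytes       */
}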

What is IOSET, and how does the simple buzzer (BUZZER_PIN) work?

I have C code for a microcontroller with a buzzer. It works, but I wonder how it works.
In wh.h/.cpp I have the function:
void setBuzzer(tBool on)
{
    if (TRUE == on)
        IOCLR = BUZZER_PIN;
    else
        IOSET = BUZZER_PIN;
}
It can enable and disable the buzzer, but I don't know what it really does, or what BUZZER_PIN, IOCLR and IOSET are.
The BUZZER_PIN occurs only once more in the code, in:
void immediateIoInit(void)
{
    tU8 initCommand[] = {0x12, 0x97, 0x80, 0x00, 0x40, 0x00, 0x14, 0x00, 0x00};
    // 04 = LCD_RST# low
    // 10 = BT_RST# low

    //make all key signals as inputs
    IODIR &= ~(KEYPIN_CENTER | KEYPIN_UP | KEYPIN_DOWN | KEYPIN_LEFT | KEYPIN_RIGHT);

    IODIR |= BUZZER_PIN;
    IOSET = BUZZER_PIN;

    IODIR |= BACKLIGHT_PIN;
    IOSET = BACKLIGHT_PIN;
It looks strange to me, because IOSET is written again (with BACKLIGHT_PIN) just after being set to BUZZER_PIN. So what does writing it that way do?
One more question: can I do something more with the buzzer? E.g. change the volume? Of course, the duration of the sound can be adjusted with setBuzzer(1), then pause(time), then setBuzzer(0).
Somewhere you will find an include file that contains the #defines for IOSET, IOCLR, etc.
Usually, they map to GPIO register addresses, eg:
#define FIO0DIR (*(volatile unsigned long *)0x3FFFC000)
IOSET is usually a writeable address that is hardware-capable of setting to 1 all bits written to it that are 1, while leaving the remainder of the GPIO bits in their previous state. This eliminates the need for a read/modify/write operation and so is much more interrupt/thread friendly. It normally has a similar 'IOCLR' partner that can clear the bits on the GPIO port that are set in its argument without affecting the state of others.
The port register itself is probably called 'IOPIN', or something like that. Modifying one bit or a subset of bits directly using IOPIN requires a read/modify/write :(
It appears that the buzzer is connected to one GPIO pin, so you can only turn it on and off - no finer control is possible.
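To illustrate the pattern (this is not taken from the question's code base): the register addresses and the bit position below are placeholders in the style of an LPC2000-family part; check your device header for the real values.
/* Placeholder register definitions, for illustration only */
#define IODIR       (*(volatile unsigned long *)0xE0028008)
#define IOSET       (*(volatile unsigned long *)0xE0028004)
#define IOCLR       (*(volatile unsigned long *)0xE002800C)

#define BUZZER_PIN  (1u << 7)   /* assumed bit position, for illustration */

static void buzzer_pin_example(void)
{
    IODIR |= BUZZER_PIN;   /* configure the pin as an output                  */
    IOSET  = BUZZER_PIN;   /* writes a 1 only to this bit: the pin goes high, */
                           /* all other pins keep their previous state        */
    IOCLR  = BUZZER_PIN;   /* clears only this bit: the pin goes low          */
}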

How are my bytes in C stored?

First, I'm still a student, so I am not very experienced.
I'm working with a piece of Bluetooth hardware and I am using its protocol to send it commands. The protocol requires packets to be sent with the LSB first for each packet field.
I was getting error packets back to me indicating my CRC values were wrong so I did some investigating. I found the problem, but I became confused in the process.
Here is some GDB output and other information illustrating my confusion.
I'm sending a packet that should look like this:
|Start Flag| Packet Num | Command | Payload | CRC | End Flag|
0xfc 0x1 0x0 0x8 0x0 0x5 0x59 0x42 0xfd
Here is some GDB output:
print /x reqId_ep
$1 = {start_flag = 0xfc, data = {packet_num = 0x1, command = {0x0, 0x8}, payload = {
0x0, 0x5}}, crc = 0x5942, end_flag = 0xfd}
reqId_ep is the variable name of the packet I'm sending. It looks all good there, but I am receiving CRC error codes back, so something must be wrong.
Here I examine 9 bytes in hex starting from the address of my packet to send:
x/9bx 0x7fffffffdee0
0xfc 0x01 0x00 0x08 0x00 0x05 0x42 0x59 0xfd
And here the problem becomes apparent. The CRC is not LSB first. (0x42 0x59)
To fix my problem I removed the htons() call I was using to set my CRC value.
And here is the same output above without htons():
p/x reqId_ep
$1 = {start_flag = 0xfc, data = {packet_num = 0x1, command = {0x0, 0x8}, payload = {
0x0, 0x5}}, crc = 0x4259, end_flag = 0xfd}
Here the printed CRC value does not look LSB first.
But then:
x/9bx 0x7fffffffdee0
0xfc 0x01 0x00 0x08 0x00 0x05 0x59 0x42 0xfd
Here the CRC value is LSB first.
So apparently C stores values LSB first? Can someone please cast a light of knowledge upon me for this situation? Thank you kindly.
This has to do with Endianness in computing:
http://en.wikipedia.org/wiki/Endianness#Endianness_and_operating_systems_on_architectures
For example, the value 4660 (base ten) is 0x1234 in hex. On a Big Endian system it would be stored in memory as the bytes 0x12 0x34, while on a Little Endian system it would be stored as 0x34 0x12.
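A quick way to see this on your own machine is to look at the bytes of a 16-bit value directly (a minimal sketch):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t value = 0x1234;
    uint8_t *bytes = (uint8_t *)&value;

    /* On a little-endian machine this prints "34 12";
     * on a big-endian machine it prints "12 34". */
    printf("%02x %02x\n", bytes[0], bytes[1]);
    return 0;
}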
If you want to avoid this sort of issue in the future, it might just be easiest to create a large array or struct of unsigned char, and store individual values in it.
eg:
|Start Flag| Packet Num | Command | Payload | CRC | End Flag|
0xfc 0x1 0x0 0x8 0x0 0x5 0x59 0x42 0xfd
typedef struct packet {
    unsigned char startFlag;
    unsigned char packetNum;
    unsigned char commandMSB;
    unsigned char commandLSB;
    unsigned char payloadMSB;
    unsigned char payloadLSB;
    unsigned char crcMSB;
    unsigned char crcLSB;
    unsigned char endFlag;
} packet_t;
You could then create a function that you compile differently based on the type of system you are building for using preprocessor macros.
eg:
#include <stdio.h>

/* Uncomment the line below if you are using a little endian system;
 * otherwise, leave it commented out. */
//#define LITTLE_ENDIAN_SYSTEM
// Function prototype
void writeCommand(int cmd, packet_t* pkt);
//Function definition
void writeCommand(int cmd, packet_t* pkt)
{
    if (!pkt)
    {
        printf("Error, invalid pointer!");
        return;
    }

#ifdef LITTLE_ENDIAN_SYSTEM
    pkt->commandMSB = (cmd & 0xFF00) >> 8;
    pkt->commandLSB = (cmd & 0x00FF);
#else   // Big Endian system
    pkt->commandMSB = (cmd & 0x00FF);
    pkt->commandLSB = (cmd & 0xFF00) >> 8;
#endif
    // Done
}
int main(void)
{
    packet_t myPacket = {0};   // Initialize so it is zeroed out
    writeCommand(0x1234, &myPacket);
    return 0;
}
One final note: avoid sending structs as a raw stream of data; send their individual elements one at a time instead! That is, don't assume that the struct is stored internally like a giant array of unsigned characters. The compiler and system apply things like packing and alignment, so the struct could actually be larger than 9 * sizeof(unsigned char).
Good luck!
This is architecture dependent, based on which processor you're targeting. There are what are known as "Big Endian" systems, which store the most significant byte of a word first, and "Little Endian" systems, which store the least significant byte first. It looks like you're looking at a Little Endian system there.
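One way to sidestep the issue entirely is to serialize multi-byte fields with shifts, which works the same regardless of how the host stores integers. A minimal sketch using the CRC value from the question (0x4259), emitted LSB first as the protocol requires:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t crc = 0x4259;             /* CRC value as computed on the host  */
    uint8_t wire[2];

    wire[0] = (uint8_t)(crc & 0xFF);   /* 0x59: least significant byte first */
    wire[1] = (uint8_t)(crc >> 8);     /* 0x42: most significant byte second */

    printf("%02x %02x\n", wire[0], wire[1]);   /* prints: 59 42 */
    return 0;
}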

Resources