Every two values in this array represent 16 pixels (8 bits per element):
GLubyte character[24] = {
0xc0, 0x00, 0xc0, 0x00, 0xc0, 0x00, 0xc0, 0x00, 0xc0, 0x00,
0xff, 0x00, 0xff, 0x00, 0xc0, 0x00, 0xc0, 0x00, 0xc0, 0x00,
0xff, 0xc0, 0xff, 0xc0
};
And this is my code to render my bitmap:
void init(){
glPixelStorei(GL_UNPACK_ALIGNMENT, 2); // bitmap rows are read from memory on 2-byte boundaries
}
void render(){
glBitmap(8, 12, 0.0, 11.0, 0.0, 0.0, character); // 8x12 bitmap, origin (0, 11), no advance
}
But when I change glBitmap(8, ...) to glBitmap(10, ...), it doesn't work.
To make it work, I need to change
glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
to
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
My problem is, I have no idea why this works...
I just know that
GL_UNPACK_ALIGNMENT, 1
tells OpenGL to just go to the next address without alignment.
I don't see any relationship between setting GL_UNPACK_ALIGNMENT to 1 and my bitmap's width.
Could somebody explain what's going on?
After going back to some historical spec documents (glBitmap() is a very obsolete call), the alignment rule for bitmaps is (page 136 of the OpenGL 2.1 spec):
k = a * ceiling(w / (8 * a))
Where:
w is the width, under the assumption that GL_UNPACK_ROW_LENGTH is not set.
a is the value of GL_UNPACK_ALIGNMENT.
k is the number of bytes used per row. Note that each row will always start on at least a byte boundary, no matter how the parameters are set.
Substituting the values from your example, for w = 8, we get:
1 byte per row with GL_UNPACK_ALIGNMENT of 1.
2 bytes per row with GL_UNPACK_ALIGNMENT of 2.
and for w = 10, we get:
2 bytes per row with GL_UNPACK_ALIGNMENT of 1.
2 bytes per row with GL_UNPACK_ALIGNMENT of 2.
Based on this, unless you also have other GL_UNPACK_* parameters set, you should get the same output for width 10 whether GL_UNPACK_ALIGNMENT is 1 or 2. If that is not what you see, it looks like a bug in the OpenGL implementation.
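To make the arithmetic concrete, here is a small C sketch of that row-size computation (bytes_per_row is my own name, not an OpenGL call):

#include <stdio.h>

/* Bytes per bitmap row: k = a * ceiling(w / (8 * a)),
   where w is the width in pixels and a is GL_UNPACK_ALIGNMENT. */
static unsigned bytes_per_row(unsigned w, unsigned a)
{
    return a * ((w + 8 * a - 1) / (8 * a));
}

int main(void)
{
    printf("w=8,  a=1 -> %u\n", bytes_per_row(8, 1));  /* 1 */
    printf("w=8,  a=2 -> %u\n", bytes_per_row(8, 2));  /* 2 */
    printf("w=10, a=1 -> %u\n", bytes_per_row(10, 1)); /* 2 */
    printf("w=10, a=2 -> %u\n", bytes_per_row(10, 2)); /* 2 */
    return 0;
}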
I currently have this HID report descriptor:
static
unsigned char hid_report_descriptor[] __attribute__ ((aligned(64))) = {
0x05, 0x01, // Usage Page (Generic Desktop Ctrls)
0x09, 0x05, // Usage (Game Pad)
0xA1, 0x01, // Collection (Application)
0xA1, 0x00, // Collection (Physical)
0x85, 0x01, // Report ID (1)
0x05, 0x09, // Usage Page (Button)
0x19, 0x01, // Usage Minimum (0x01)
0x29, 0x10, // Usage Maximum (0x10)
0x15, 0x00, // Logical Minimum (0)
0x25, 0x01, // Logical Maximum (1)
0x95, 0x10, // Report Count (16)
0x75, 0x01, // Report Size (1)
0x81, 0x02, // Input (Data,Var,Abs,No Wrap,Linear,Preferred State,No Null Position)
0x05, 0x01, // Usage Page (Generic Desktop Ctrls)
0x09, 0x30, // Usage (X)
0x09, 0x31, // Usage (Y)
0x09, 0x32, // Usage (Z)
0x09, 0x33, // Usage (Rx)
0x15, 0x81, // Logical Minimum (-127)
0x25, 0x7F, // Logical Maximum (127)
0x75, 0x08, // Report Size (8)
0x95, 0x04, // Report Count (4)
0x81, 0x02, // Input (Data,Var,Abs,No Wrap,Linear,Preferred State,No Null Position)
0xC0, // End Collection
0xC0, // End Collection
};
It corresponds to this struct.
struct GamepadReport {
uint8_t report_id;
uint16_t buttons;
int8_t left_x;
int8_t left_y;
int8_t right_x;
int8_t right_y;
} __attribute__((packed));
I'm trying to add support for a single extra button that should serve as the "home" button (think of the X on an Xbox controller). This, in theory, should be done by changing the lines containing 0x29, 0x10 and 0x95, 0x10 to 0x29, 0x11 and 0x95, 0x11 respectively. However, doing so breaks the connection with the custom controller.
I cannot for the life of me figure out why this is and it makes absolutely zero sense to me. Can someone with more experience or knowledge about HID descriptors give me a hand?
In case anyone stumbles upon this or has a similar issue: you can't create a Report Count field in a HID descriptor with a number of bits not divisible by 8 unless you add padding bits.
The solution was straightforward after reviewing the comments on my question and looking at similar issues online.
My gamepad report struct could only hold 16 button bits. Even with a correctly defined HID descriptor, that would have prevented it from working. I changed my struct to the following.
struct GamepadReport {
uint8_t report_id;
uint32_t buttons;
int8_t left_x;
int8_t left_y;
int8_t right_x;
int8_t right_y;
} __attribute__((packed));
Modify your HID descriptor to contain padding bits up to the next number divisible by 8 that fits within your struct types. In this case, I need to fill 32 bits and I have 17 buttons; 32 - 17 means I need to add 15 padding bits.
0x75, 0x0F, // Report Size (15) - PADDING BITS
0x95, 0x01, // Report Count (1)
0x81, 0x03, // Input (Const,Var,Abs,No Wrap,Linear,Preferred State,No Null Position)
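For reference, the button section of the descriptor then ends up looking something like this (a sketch assembled from the changes above, not a verbatim copy of my descriptor):

0x05, 0x09, // Usage Page (Button)
0x19, 0x01, // Usage Minimum (0x01)
0x29, 0x11, // Usage Maximum (0x11) - 17 buttons now
0x15, 0x00, // Logical Minimum (0)
0x25, 0x01, // Logical Maximum (1)
0x95, 0x11, // Report Count (17)
0x75, 0x01, // Report Size (1)
0x81, 0x02, // Input (Data,Var,Abs)
0x75, 0x0F, // Report Size (15) - padding up to the 32 bits of the buttons field
0x95, 0x01, // Report Count (1)
0x81, 0x03, // Input (Const,Var,Abs)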
I am currently writing code to draw on an LCD screen pixel by pixel. The code works, but it runs incredibly slowly. The goal is simply to write numbers on the LCD screen, so I am using a switch statement with a for loop to look up each of the bytes I will send. I am wondering if someone could tell me a way to speed up my code...
int* switch_library_number_1(int num, int octet)
{
switch(num)
{
case 0 : ;
int number_0 [] = {0x80, 0x08,
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xE0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x07, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x88,
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xE0, 0x00, 0x00, 0x00, ...};
int * pNumber_0 = &number_0[octet];
return pNumber_0;
break;
case 1 : ;
int number_1 [] = {0x80, 0x08,
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x88, ...};
int * pNumber_1 = &number_1[octet];
return pNumber_1;
break;
}
It goes up to nine like that; I don't think you need to see all the cases. Even with most of them elided, I have 522 bytes per number. The rest of the code goes as follows:
int main(void)
{
ADC_Initialization();
SPI_Initialization();
int nombre_octet = 522;
int premier_nombre;
int deuxieme_nombre;
while(1)
{
GPIOA->BSRRL = CS;
for(int i = 0; i < nombre_octet; i++)
{
write_spi(*switch_library_number_1(0, i));
}
GPIOA -> BSRRH = CS;
for(int i = 0; i < 100; i++)
{
}
GPIOA->BSRRL = CS;
for(int i = 0; i < nombre_octet; i++)
{
write_spi(*switch_library_number_2(1, i));
}
GPIOA -> BSRRH = CS;
}
}
Finally, here is the write_spi function; due to its simplicity, I don't think it is the problem.
void write_spi(char data)
{
SPI1->DR = data;
while (!(SPI1->SR & SPI_I2S_FLAG_TXE));  // wait for the TX buffer to empty
while (!(SPI1->SR & SPI_I2S_FLAG_RXNE)); // wait for the received byte
while (SPI1->SR & SPI_I2S_FLAG_BSY);     // wait until the bus is idle
}
Thanks in advance!
I quite like the way you split your code into three snippets. I can suggest improvements for each of them:
switch_library_number_1():
This could be just a 2D array, number[][], or, if number_0, number_1, ... are not all the same length, an array of pointers to them. There would need to be checks for valid num and octet. This might be a minor speed improvement.
Your number_0... arrays are currently on the stack and read-write. Make them const so they won't use RAM.
Currently you are returning a pointer to a memory location on the stack - this doesn't normally work, and if it does, it's by luck and accident. You should not access stack data outside the scope (function) where it is defined. static const would make this safe, as the data would no longer be on the stack.
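For example, a minimal sketch of that change (numbers and NUM_OCTETS are my names; it assumes all ten digits share the same 522-byte length, and the pixel data is abbreviated):

#include <stddef.h>
#include <stdint.h>

#define NUM_OCTETS 522

/* static const keeps the tables in flash instead of RAM, and makes
   returning a pointer into them safe after the function returns. */
static const uint8_t numbers[10][NUM_OCTETS] = {
    { 0x80, 0x08, 0xFF, 0xFF /* ... rest of digit 0 ... */ },
    { 0x80, 0x08, 0xFF, 0xFF /* ... rest of digit 1 ... */ },
    /* ... digits 2 to 9 ... */
};

const uint8_t *switch_library_number(int num, int octet)
{
    if (num < 0 || num > 9 || octet < 0 || octet >= NUM_OCTETS)
        return NULL; /* caller should check for NULL */
    return &numbers[num][octet];
}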
main loop:
It's a bit odd to call switch_library_number_1/2 on each loop iteration when you know your data is just sitting in an array. This could probably be replaced by write_spi(number[0][i]); if the number array is properly set up (see the sketch after this list). This should get you some speed improvement, as it greatly simplifies the data fetching.
You appear to have a busy-wait loop. That's a tricky practice (I bet 100 is a guess, and note that the compiler could optimise the loop away). If possible, use a library-provided delay function or a timer to get precise delays. Is this delay an actual requirement of the SPI slave?
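With the const array above in place, the main loop could look something like this sketch (delay_us() stands in for whatever delay facility your environment actually provides):

while (1)
{
    GPIOA->BSRRL = CS; /* toggle CS exactly as in the original code */
    for (int i = 0; i < NUM_OCTETS; i++)
        write_spi(numbers[0][i]); /* direct array access, no switch */
    GPIOA->BSRRH = CS;

    delay_us(10); /* hypothetical delay helper replacing the empty busy loop */

    GPIOA->BSRRL = CS;
    for (int i = 0; i < NUM_OCTETS; i++)
        write_spi(numbers[1][i]);
    GPIOA->BSRRH = CS;
}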
write_spi(char data):
char should be unsigned char here. char may be signed or unsigned, so when you're using chars as bytes (not actual string characters), you should specify the signedness.
You seem to wait for every byte transmission to finish completely, which is safe but a bit slow. Normally this can be rewritten into the faster pattern wait_for_SPI_ready_for_TX; SPI_TX, where you only wait before sending the next byte. Note that you will then also need to wait for the last byte to be transmitted fully before pulling CS high again. This could be a big speed improvement.
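A sketch of that pattern, using the same flags as your write_spi() (write_spi_fast and spi_wait_idle are my names):

/* Wait for room in the TX register *before* writing, instead of
   waiting out the whole byte exchange after every write. */
void write_spi_fast(unsigned char data)
{
    while (!(SPI1->SR & SPI_I2S_FLAG_TXE)); // wait until the TX buffer has room
    SPI1->DR = data;
}

/* Call once before pulling CS high again, so the last byte has
   fully left the shift register. */
void spi_wait_idle(void)
{
    while (!(SPI1->SR & SPI_I2S_FLAG_TXE));
    while (SPI1->SR & SPI_I2S_FLAG_BSY);
}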
Some other things to consider:
What's the actual SPI clock? There may be huge speed improvements available if the clock can be increased.
How did you measure this to be "slow"? Can you point to the slow parts of the code (and if they aren't obvious from the C, what do they compile to)?
Have you got an oscilloscope/logic analyser to look at actual signals on wire? This may provide useful data.
I had a similar problem with an STM32F207-series Cortex-M3 controller. When I observed the TX line with an oscilloscope, I saw that the chip-select disable was taking too much time to set in after all the data had been sent. I figured out it was related to the flag checks, so I played with the control flags a little bit. Here is how it worked out just fine for me:
static void SPI_Send(uint16_t len, uint8_t* data)
{
uint16_t i;
for (i = 0; i < len; i++)
{
SPI_I2S_SendData(SPI1, *(data + i));
while (!(SPI1->SR & SPI_SR_TXE)); // wait only for room in the TX buffer
}
while (SPI1->SR & SPI_SR_BSY); // wait for the last byte to leave the wire
CHIP_SEL_DISABLE;
}
I believe yours is slow because you are also checking the 'Receive Buffer Not Empty' flag where you don't need to.
It seems this code makes my display go crazy sometimes (but only sometimes). When I remove dat = ~dat; it seems to work fine. Why?
What I am trying to do here is just make the ASCII letters the opposite, so for example:
11001000 becomes 00110111
and 10101111 becomes 01010000.
The reason for doing this is that I want one row (the active row) in the display window to show black-on-white pixels instead of the opposite, like the rest of the display window. Is there some other way I could do this (invert the numbers)?
FYI: I am programming in C in Atmel Studio, with an ATmega4809, an SSD1305z display, and an SPI-similar interface.
void displayinvertedString(char str[], uint8_t ypos,uint8_t xpos)
{
Set_Page_Address(ypos);
Set_Column_Address(xpos);
int len = strlen(str);
uint8_t dat;
int temp;
for (int e=0; e<len; e++)
{
dat = 0xff;
Write_Data(dat); // extra column between letters to make the text easier to read
temp = str[e];
temp=temp-0x20; // As the lookup table starts from Space(0x20)
for (int w=0; w<5; w++)
{
dat = OledFontTable[temp][w]; // get this character's column data from the lookup table
dat = ~dat; // invert the bits
Write_Data(dat);
}
}
}
----------
static uint8_t OledFontTable[][FONT_SIZE]={
//static uint8_t OledFontTable[] = {
0x00, 0x00, 0x00, 0x00, 0x00, // space
0x00, 0x00, 0x2f, 0x00, 0x00, // !
0x00, 0x07, 0x00, 0x07, 0x00, // "
0x14, 0x7f, 0x14, 0x7f, 0x14, // #
0x24, 0x2a, 0x7f, 0x2a, 0x12, // $
0x23, 0x13, 0x08, 0x64, 0x62, // %
0x36, 0x49, 0x55, 0x22, 0x50, // &
// ... etc. (more raw pixel data here; the table ends like this:)
0x00, 0x00, 0xFF, 0x00, 0x00, // |
0x00, 0x82, 0x7C, 0x10, 0x00, // }
0x00, 0x06, 0x09, 0x09, 0x06 // ~ (Degrees)
};
void Write_Data(unsigned char Data)
{
PORTA.OUTCLR = PIN7_bm; // CS low (select display)
PORTB.OUTSET = PIN2_bm; // DC high (data mode)
Write_Command(Data);
}
void Write_Command(unsigned char data)
{
SPI0.DATA = data; // copy data to the DATA register
while ((SPI0.INTFLAGS & SPI_RXCIF_bm) == 0); // wait for the transfer to complete
}
I have asked a bit about this before, but I thought it would look "cleaner" with a new thread since info was missing from the last one.
It turned out I needed to toggle the chip select (CS) so the clock did not get out of sync: the clock sync drifted over time. It went crazy faster with the inverted data for some reason, but with the normal (non-inverted) data it happened after some time as well.
Thank you for the answers.
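For anyone curious, the kind of change described above would look roughly like this sketch of Write_Data() (assuming CS is active low, as the original code suggests):

void Write_Data(unsigned char Data)
{
    PORTA.OUTCLR = PIN7_bm;                      // CS low (select display)
    PORTB.OUTSET = PIN2_bm;                      // DC high (data mode)
    SPI0.DATA = Data;                            // start the transfer
    while ((SPI0.INTFLAGS & SPI_RXCIF_bm) == 0); // wait for the transfer to complete
    PORTA.OUTSET = PIN7_bm;                      // CS high again, so the clock cannot drift out of sync
}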
I'm building a custom keyboard with stm32f103.
My first trial with the standard 8-byte report works pretty well:
0x05, 0x01, // Usage Page (Generic Desktop)
0x09, 0x06, // Usage (Keyboard)
0xA1, 0x01, // Collection (Application)
//Modifiers
0x05, 0x07, // Usage Page (Key Codes)
0x19, 0xe0, // Usage Minimum (224)
0x29, 0xe7, // Usage Maximum (231)
0x15, 0x00, // Logical Minimum (0)
0x25, 0x01, // Logical Maximum (1)
0x75, 0x01, // Report Size (1)
0x95, 0x08, // Report Count (8)
0x81, 0x02, // Input (Data, Variable, Absolute)
//Reserveds
0x95, 0x01, // Report Count (1)
0x75, 0x08, // Report Size (8)
0x81, 0x01, // Input (Constant) reserved byte(1)
//Regular Keypads
0x95, 0x06, // Report Count (normally 6)
0x75, 0x08, // Report Size (8)
0x26, 0xff, 0x00, // Logical Maximum (255)
0x05, 0x07, // Usage Page (Key codes)
0x19, 0x00, // Usage Minimum (0)
0x29, 0xbc, // Usage Maximum (188)
0x81, 0x00, // Input (Data, Array) Key array(6 bytes)
0xC0 // End Collection (Application)
Then I tried to make the report longer to support more simultaneous key presses, so I changed this
0x95, 0x06, // Report Count (normally 6)
to this
0x95, 0x30, // Report Count (48)
and accordingly
struct HIDreport
{
int8_t mod;
int8_t reserv;
int8_t key[48]; // 48 = 0x30, matching the new Report Count
};
struct HIDreport report;
But I found that none of the key presses work. What am I missing?
Thanks.
If your Interface defines the keyboard as a "BOOT keyboard", for example:
0x03, /*bInterfaceClass: HID*/
0x01, /*bInterfaceSubClass : 1=BOOT, 0=no boot*/
0x01, /*nInterfaceProtocol : 0=none, 1=keyboard, 2=mouse*/
...then I'm pretty sure the HID report descriptor will be ignored. The idea of a "BOOT keyboard" is that it uses a fixed size buffer (1 byte keyboard modifiers, 1 byte reserved, 6 bytes of keyboard usage indexes) so that it can be recognised during boot up (e.g. to modify CMOS settings) without having to implement a full USB stack in the BIOS.
Valid combinations of Class, Subclass, Protocol are as follows:
Class  Subclass  Protocol  Meaning
  3       0         0      Class=HID with no specific Subclass or Protocol:
                           can have ANY size reports (not just 8-byte reports)
  3       1         1      Class=HID, Subclass=BOOT device, Protocol=keyboard:
                           REQUIRES 8-byte reports in order for it to be recognised
                           by the BIOS when booting. That is because the entire USB
                           protocol cannot be implemented in BIOS, so motherboard
                           manufacturers have agreed to use a fixed 8-byte report
                           during booting.
  3       1         2      Class=HID, Subclass=BOOT device, Protocol=mouse
The above information is documented in Appendix E.3 "Interface Descriptor (Keyboard)"
of the "Device Class Definition for Human Interface Devices (HID) v1.11" document (HID1_11.pdf) from www.usb.org
Edit: The use case for a buffer size of more than 6 keys is questionable anyway because the buffer represents those keys that are simultaneously pressed at a particular point in time. It is unlikely that anyone would need to press more than 6 keys simultaneously.
I'm using the following code to send a sequence of hex bytes to an NSOutputStream:
uint8_t mirandaroutecommand[] = { 0x00, 0x00, 0x00, 0x0e, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x30, 0x14 };
NSData *data = [[NSData alloc] initWithBytes:mirandaroutecommand length:sizeof(mirandaroutecommand)];
[hextooloutputStream write:[data bytes] maxLength:[data length]];
This works great, but the problem is I need these hex values to come from NSTextFields in the user interface. I've tried to convert the NSTextField data to an integer, but that didn't work. Is there a way to use data from several NSTextFields to build an array of integers from hex values?
I've tried to find an existing topic covering this but I've had no luck.
The NSString methods integerValue and intValue assume a decimal representation and ignore any trailing gumph - so apply them to @"0x01" and they see 0 followed by some gumph (x01), which they ignore.
To read a hex value use NSScanner's scanHexInt: (or scanHexLongLong:):
unsigned scannedHexValue;
if ([[NSScanner scannerWithString:textFieldContents] scanHexInt:&scannedHexValue])
{
// value found
...
}
NSScanner can do more than parse a single value; with the appropriate method calls you can scan a comma-separated list of hex values if you wish.