PIC16F877A timer1 interrupt time is not as expected - c

I implemented an interrupt on TIMER1 of a PIC16F877A MCU on a PIC-DIP40 development board. The timer prescaler is set to 1 and the preload value to 55536 so that the interrupt period is 0.01 s. Fosc is 4 MHz, so my calculation is:
interrupt time = (4 / Fosc) * (65536 - 55536) = (4/4000000) * (65536 - 55536) = 0.01 s
A counter of 100 then turns this into a 1 s interval.
I currently have no oscilloscope to test the actual 1 s interval, so I blink one LED (LED2) from the timer interrupt and another LED (LED1) on the same 1 s interval using the __delay_ms(1000); function.
I expected the two LEDs to blink synchronously (turn ON and OFF at the same time). For the first few iterations they do, but after a while there is a clear difference between their ON/OFF times, and after several minutes the difference is almost 1 s. So the timer interrupt is not working as expected.
Is my calculation of the interrupt time wrong, or am I missing something in the Timer1 configuration?
The overall goal is to generate a 1 s time interval and verify it without an oscilloscope.
Here is my code:
// CONFIG
#pragma config FOSC = HS // Oscillator Selection bits (HS oscillator)
#pragma config WDTE = OFF // Watchdog Timer Enable bit (WDT disabled)
#pragma config PWRTE = OFF // Power-up Timer Enable bit (PWRT disabled)
#pragma config BOREN = OFF // Brown-out Reset Enable bit (BOR disabled)
#pragma config LVP = OFF // Low-Voltage (Single-Supply) In-Circuit Serial Programming Enable bit (RB3 is digital I/O, HV on MCLR must be used for programming)
#pragma config CPD = OFF // Data EEPROM Memory Code Protection bit (Data EEPROM code protection off)
#pragma config WRT = OFF // Flash Program Memory Write Enable bits (Write protection off; all program memory may be written to by EECON control)
#pragma config CP = OFF // Flash Program Memory Code Protection bit (Code protection off)
#include <xc.h>
#include <pic16f877a.h> // note: xc.h already pulls in the device header
#include <stdint.h>     // needed for uint16_t
#define _XTAL_FREQ 4000000
#define LED1_ON PORTDbits.RD7 = 0
#define LED1_OFF PORTDbits.RD7 = 1
#define LED2_ON PORTDbits.RD6 = 0
#define LED2_OFF PORTDbits.RD6 = 1
#define LED2_TOGGLE PORTDbits.RD6 = ~PORTDbits.RD6
uint16_t preloadValue = 55536 ;
uint16_t counter = 0 ;
uint16_t secCounter1 = 100 ;
void io_config() {
    TRISD &= ~((1 << _PORTD_RD7_POSITION) | (1 << _PORTD_RD6_POSITION)); // RD7 and RD6 are output LEDs
}

void timer1_init() {
    TMR1 = preloadValue; // load the preload value
    T1CON &= ~((1 << _T1CON_T1CKPS1_POSN) | (1 << _T1CON_T1CKPS0_POSN) | (1 << _T1CON_TMR1CS_POSN)); // prescaler is 1, clock is Fosc/4
    T1CONbits.TMR1ON = 1; // timer 1 is ON
    LED2_ON;
}

void interrupt_en_configure() {
    INTCON |= (1 << _INTCON_GIE_POSITION) | (1 << _INTCON_PEIE_POSITION); // global and peripheral interrupts on
    PIE1 |= _PIE1_TMR1IE_MASK; // timer 1 interrupt enable
    TMR1IF = 0; // clear the interrupt flag
}

void __interrupt() ISR() {
    if (TMR1IF) {
        counter++;
        if (counter == secCounter1) {
            counter = 0;
            LED2_TOGGLE;
        }
        TMR1 = preloadValue;
        TMR1IF = 0;
    }
}

void main(void) {
    io_config();
    interrupt_en_configure();
    timer1_init();
    while (1) {
        LED1_ON;
        __delay_ms(1000);
        LED1_OFF;
        __delay_ms(1000);
    }
}

You should not expect them to operate synchronously for the following reasons:
First, you do not know how __delay_ms() is implemented or what "promises" of precision it makes - it is certainly not using TIMER1, because you are controlling that. In fact the documentation gives some implementation details, and you really cannot expect precision.
Secondly, even if __delay_ms() were both accurate and synchronous, you are invoking it in a loop with the software overhead of the loop, the function calls and whatever you do to toggle the LED. That adds a few cycles on every iteration, whereas the interrupt interval is locked to the hardware and is independent of the software timing.
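To put a rough number on it: if the two LEDs are about 1 s apart after, say, five minutes, that is roughly 150 passes through the 2 s loop, so the combined error (delay inaccuracy plus loop, call and ISR overhead, however it is split between the two timebases) only needs to be a few milliseconds per pass - well under 1 % - for the drift you describe to appear.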
The issue of the precision of __delay_ms() is in fact addressed in this Microchip support article, which starts:
If an accurate delay is required, or if there are other tasks that can be performed during the delay, then using a timer to generate an interrupt is the best way to proceed.
In this case you should trust your code over the library-provided delay, which is intentionally crude (because it does not use up a valuable H/W timer resource).

__delay_ms() delays by running an empty loop, and it is usually not exact. You would need to look at the actual machine code that runs to calculate the real delay. BTW, this is not rocket science and is a great learning task. (Been there, done that.)
The rest of your loop (LED switching, the loop itself) adds to this. Therefore, your purely software-driven blinker is not exact.
However, your interrupt-driven blinker is not exact either. You reload the timer at the end of the ISR, after several clock cycles have passed. You need to take this into account, and don't forget the interrupt latency. Even worse, depending on the conditional statement, the reload happens at different times after the timer overflow.
Producing exact timing is difficult, especially with such a simple device.
The solution is to keep software out of the timer reset entirely. Read chapter 8 of the data sheet and use the capture/compare/PWM (CCP) module to reset the timer at the appropriate value.
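A rough sketch of that approach (not tested): with CCP1 in Compare mode with the Special Event Trigger, the hardware clears TMR1 on every match, so the ISR never reloads the timer. Assuming Fosc = 4 MHz and a 1:1 prescaler, Timer1 ticks at 1 MHz and a compare value of 10000 gives the same 0.01 s period; these two routines would replace timer1_init() and the ISR in the question's code:

#include <xc.h>
#include <stdint.h>

static volatile uint8_t tickCounter = 0;    // counts 0.01 s ticks

void timer1_ccp_init(void) {
    TMR1 = 0;
    T1CON = 0x01;            // prescaler 1:1, clock = Fosc/4, Timer1 on
    CCPR1H = 10000 >> 8;     // compare value: 10000 ticks = 0.01 s at 1 MHz
    CCPR1L = 10000 & 0xFF;
    CCP1CON = 0x0B;          // Compare mode, Special Event Trigger clears TMR1 on match
    PIR1bits.CCP1IF = 0;
    PIE1bits.CCP1IE = 1;     // interrupt on every compare match
    INTCONbits.PEIE = 1;
    INTCONbits.GIE = 1;
}

void __interrupt() isr(void) {
    if (PIR1bits.CCP1IF) {
        PIR1bits.CCP1IF = 0;
        if (++tickCounter == 100) {          // 100 x 0.01 s = 1 s
            tickCounter = 0;
            PORTDbits.RD6 = !PORTDbits.RD6;  // toggle LED2; no timer reload needed
        }
    }
}

Because the hardware clears TMR1 at the match, the 0.01 s period no longer depends on interrupt latency or on where in the ISR the reload happens.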
The worst that can still happen is some jitter, because the ISR latency may vary. But the timer itself runs as exactly as your system's crystal, so on average your LED will blink correctly.
Anyway, if your timing requirements are not that strict, consider living with some inaccuracy and use the simplest solution you like best.

Related

Attiny85, microsecond timer implemented on timer0 does not count the correct time

I have an ATtiny85 with its internal clock source running at 8 MHz.
I am trying to implement a microsecond timer based on the hardware timer Timer0.
Here is my logic:
Since the clock frequency is 8 MHz and the prescaler is off, the time of one clock cycle will be about 0.1us (1/8000000).
By default the timer overflows and interrupts after counting through 0...255, which takes far longer than 0.1 µs and is inconvenient for counting out 1 µs.
To solve this, I decided to change the initial value of the timer from 0 to 245. Then only 10 clock cycles are needed to reach the overflow interrupt, which takes about 1 µs.
I load this code, but the ATtiny's LED visibly takes about 5 seconds to toggle, although the code specifies 1 second (1000000 µs).
Code:
#include <avr/io.h>
#undef F_CPU
#define F_CPU 8000000UL
#include <avr/interrupt.h>
// Timer0 init
void timer0_Init() {
    cli();
    //SREG &= ~(1 << 7);
    // Enable interrupt for timer0 overflow
    TIMSK |= (1 << 1);
    // Enable timer0, no prescaler - CS02..CS00 = 001
    TCCR0B = 0;
    TCCR0B |= (1 << 0);
    // Preload timer0 counter
    TCNT0 = 245;
    sei();
    //SREG |= (1 << 7);
}

// timer0 overflow interrupt
// 1us interval logic:
// MCU frequency = 8MHz (8000000Hz), no prescaler
// 1 tick = 1/8000000 = 100ns = 0.1us, counter up++ after 1 tick (0.1us)
// 1us timer = 10 ticks => 245..255
static unsigned long microsecondsTimer;

ISR(TIMER0_OVF_vect) {
    microsecondsTimer++;
    TCNT0 = 245;
}

// Millis
/*unsigned long timerMillis() {
    return microsecondsTimer / 1000;
}*/

void ledBlink() {
    static unsigned long blinkTimer;
    static int ledState;
    // 10000us = 0.01s
    // 1000000us = 1s
    if (microsecondsTimer - blinkTimer >= 1000000) {
        if (!ledState) {
            PORTB |= (1 << 3);  // HIGH
        } else {
            PORTB &= ~(1 << 3); // LOW
        }
        ledState = !ledState;
        blinkTimer = microsecondsTimer;
    }
}

int main(void)
{
    // Set LED pin to OUTPUT mode
    DDRB |= (1 << 3);
    timer0_Init();
    while (1)
    {
        ledBlink();
    }
}
Attiny85 Datasheet
What could be the mistake? I have not yet learned how to work with fuses, so I initially set the fuses for 8 MHz through the Arduino IDE, and after that I uploaded the main code (without changing the fuses) through AVRDUDE and Atmel Studio.
And another question: should I check for a maximum value when updating my microsecond counter? I know that in Arduino the micros and millis counters roll over when they reach their maximum value. For example, if I never clear the microsecondsTimer variable and it exceeds the range of unsigned long, will the program crash?
As pointed out by @ReAI, your ISR does not have enough time to run: it takes more than 1 microsecond to execute and return, so you are constantly missing interrupts.
There are other problems here too. For example, your microsecondsTimer variable is accessed in both the ISR and the foreground and is a long. long variables are 4 bytes wide and so are not updated atomically. It is possible, for example, that your foreground could start reading the value for microsecondsTimer and then in the middle of the read, the ISR could update some of the unread bytes, and then when the foreground starts again it will end up with a mangled value. Also, you should avoid messing with the count register since updating it can miss ticks unless you are very careful.
So how could you implement a working µsec timer? Firstly, you want to call the ISR as infrequently as possible, so pick the largest prescaler that still gives the resolution you want and only interrupt on overflow. In the case of the ATtiny85 Timer0 you can pick the /8 prescaler, which gets you one timer tick per microsecond with an 8 MHz system clock. Now your ISR only runs once every 256 microseconds, and when it runs it need only increment an overflow counter (each count representing 256 µs).
To read the current microseconds in the foreground, you get the microseconds mod 256 by reading the count register directly, then read the overflow counter, multiply it by 256 and add the count register value, and you have the full count. Note that you will need to take special precautions to make sure your reads are atomic. You can do this either by briefly turning interrupts off, quickly reading the values and turning interrupts back on (saving all the math for when interrupts are back on), or by looping on the read values until you get two full reads in a row that are the same (which means they did not change while you were reading them).
You can check out the source code of the Arduino timer ISR for some insights, but theirs is more complicated because it handles a wide range of tick speeds, whereas you can keep things simple by specifically picking a 1 µs tick.
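A minimal sketch of that scheme for the ATtiny85 (untested; it assumes an 8 MHz system clock and borrows the pending-overflow check used by the Arduino core):

#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/atomic.h>   // ATOMIC_BLOCK
#include <stdint.h>

static volatile uint32_t timer0_overflows;   // each overflow = 256 us

void timer0_init(void)
{
    TCCR0A = 0;                    // normal mode, count 0..255
    TCCR0B = (1 << CS01);          // prescaler /8 -> 1 tick = 1 us at 8 MHz
    TIMSK |= (1 << TOIE0);         // overflow interrupt only, every 256 us
    sei();
}

ISR(TIMER0_OVF_vect)
{
    timer0_overflows++;            // cheap ISR: one increment per 256 us
}

uint32_t micros(void)
{
    uint32_t ovf;
    uint8_t  cnt;
    ATOMIC_BLOCK(ATOMIC_RESTORESTATE)
    {
        ovf = timer0_overflows;
        cnt = TCNT0;
        // an overflow may have become pending after interrupts were
        // disabled; count it if TCNT0 has already wrapped to a low value
        if ((TIFR & (1 << TOV0)) && cnt < 255)
            ovf++;
    }
    return (ovf << 8) | cnt;       // overflows * 256 + current tick
}

The foreground then gets 1 µs resolution from TCNT0 while the ISR only fires every 256 µs, and the ATOMIC_BLOCK handles the multi-byte read problem mentioned above.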
Why didn't you use a prescaler?!
Your code needs a really, really long delay interval (1 s is a huge time compared to the CPU speed), so it is not wise to interrupt the microcontroller every 1 µs. It would be much better to slow the timer clock down and take an interrupt, for example, every 1 ms.
Calculation
The microcontroller clock is 8 MHz, so if we choose a prescaler of 64 the timer clock becomes 8 MHz / 64 = 125 kHz, which means each tick (timer clock period) takes 1 / 125 kHz = 8 µs.
So if we want an interrupt every 1 ms we need 125 ticks.
Modified code
Try this code; it should be clearer to understand:
#undef F_CPU
#define F_CPU 8000000UL
#include <avr/io.h>
#include <avr/interrupt.h>
volatile int millSec;
void timer0_Init();
void toggleLed();
int main(void)
{
    // Set LED pin to OUTPUT mode
    DDRB |= (1 << 3);
    timer0_Init();
    millSec = 0; // init the millisecond counter
    sei();       // set Global Interrupt Enable
    while (1)
    {
        if (millSec >= 1000) {
            // this block of code will run every 1 sec
            millSec = 0;  // start counting the new second
            toggleLed();  // just toggle the LED state
        }
        // Do other background jobs
    }
}

//##### Helper functions ###########
void timer0_Init() {
    // Preload timer0 counter
    TCNT0 = 130; // 255 - 125 = 130
    // Enable interrupt for timer0 overflow
    TIMSK = (1 << 1);
    // set prescaler to 64 and start the timer
    TCCR0B = (1 << CS00) | (1 << CS01);
}

void toggleLed() {
    PORTB ^= (1 << 3); // toggle LED output
}

ISR(TIMER0_OVF_vect) {
    // this interrupt will happen every 1 ms
    millSec++;
    // Preload timer0 counter
    TCNT0 = 130;
}
Sorry, I am late, but I have some suggestions. With prescaler 1 the timer counts up every 125 ns, so it is not possible to hit 1 µs without a small divergence. But if you use prescaler 8, one tick is exactly 1 µs. I do not have your hardware, but give this a try:
#ifndef F_CPU
#define F_CPU 8000000UL
#else
#error "F_CPU already defined"
#endif
#include <avr/io.h>
#include <avr/interrupt.h>
volatile unsigned int microsecondsTimer;
// Interrupt for Timer0 Compare Match A
ISR(TIMER0_COMPA_vect)
{
    microsecondsTimer++;
}

// Timer0 init
void timer0_Init()
{
    // Timer0:
    // - Mode: CTC
    // - Prescaler: /8
    TCCR0A = (1 << WGM01);
    TCCR0B = (1 << CS01);
    OCR0A = 1;
    TIMSK = (1 << OCIE0A);
    sei();
}

void ledBlink() {
    static unsigned int blinkTimer;
    if (microsecondsTimer >= 1000)
    {
        microsecondsTimer = 0;
        blinkTimer++;
    }
    if (blinkTimer >= 1000)
    {
        PORTB ^= (1 << PINB3);
        blinkTimer = 0;
    }
}

int main(void)
{
    // Set LED pin to OUTPUT mode
    DDRB |= (1 << PINB3);
    timer0_Init();
    while (1)
    {
        ledBlink();
    }
}
If you are using the internal clock of the ATtiny, it may be divided by 8. To disable the clock division you have to change the clock prescaler within 4 clock cycles of enabling the change (an atomic sequence):
int main(void)
{
    // Reset the clock prescaling
    CLKPR = (1 << CLKPCE); // enable a prescaler change
    CLKPR = 0x00;          // no division
    // ...
Please try this solution and give feedback on whether it works. Maybe you can verify it with an oscilloscope...
Notice that operations on unsigned long need more than one clock cycle on an 8-bit microcontroller; it may be better to use unsigned int or unsigned char. The main loop also should not contain too many instructions, otherwise error correction for the microsecond timer would have to be implemented.

Erroneous Pulsetrain Timing

I am having some trouble with a timer on my Arduino ATmega328P-PU with a 16 MHz clock.
I have a really simple program with only one timer, two ISRs, and one pin.
The program does the following:
It iterates through the bits of 'sequence' and sets pin 4 high or low accordingly. However it does not keep the pin high for the entire period, only 1/12 of it. What you see below is a single timer that counts up from 0 to 340: ISRB fires at 28 and ISRA fires at 340, then it loops (that is what CTC mode does, it restarts after ISRA). ISRB always turns the pin off, and ISRA decides whether or not the pin should go high.
Now the problem: the timing works for each bit, but for some reason the loop-over event causes the pulse spacing to SHORTEN. Yes, shorten, not widen (which, if anything, is what I would expect because of the extra clock cycles spent handling the loop-over).
It makes waveforms that look like this:
_|_|_|_|_ _ _ _ _|_|_|_||_|_|_|_ _ _ _
You can see that the problem is at the junction between two packets, while the rest of the timing is good. I can't seem to track down why.
#include <stdint.h>
uint32_t sequence =0b111100001111; // example data sequence
uint8_t packetlength = 12;
uint8_t index = 0;
void setup(){
    DDRD = 0xFF;        // all of port D as output
    bitSet(DDRD, 4);    // board pin 4 output
    bitSet(PORTD, 4);   // start high
    // initialize timer1
    TCCR1A = 0;         // zero the timer control registers
    TCCR1B = 0;
    TCNT1 = 0;          // set timer counter to 0
    OCR1A = 340;        // compare match register: 340 * 62.5ns = 21.25us
    OCR1B = 28;         // 28 * 62.5ns = 1.75us
    TIMSK1 = 0;
    TCCR1B |= (1 << WGM12);  // CTC mode
    TCCR1B |= (1 << CS10);   // CS10: no prescaler (use CS12 for a 256 prescaler for testing)
    TIMSK1 |= (1 << OCIE1A); // enable timer compare A interrupt
    TIMSK1 |= (1 << OCIE1B); // enable timer compare B interrupt
}

ISR(TIMER1_COMPA_vect){ // controls bit repeat rate
    if (bitRead(sequence, index) == 1){
        bitSet(PORTD, 4); // set high
    }
    index++;
    if (index == packetlength){ // loop over when end reached
        index = 0;
    }
}

ISR(TIMER1_COMPB_vect){ // controls duty cycle
    bitClear(PORTD, 4); // set low
}

void loop(){
    // nothing
}
Edit: April 5. Scope photos demonstrating inter pulsetrain period shortening.
The important measurement value is BX-AX
Normal: 340 + 6 clock cycles (best estimate from the scope).
Bad: the timer only counts 284 cycles before the interrupt fires.
Also bad, but not a huge problem: this pulse is far too wide to be reasonably explained by the clock cycles needed to set the bit low. It appears to take 17 cycles where I would expect 3.
I do not see why you should expect precise timing at the bit output. Interrupts begin after a delay that varies with the execution time of whatever instruction happens to be running when the interrupt is requested. I suspect (without seeing evidence in your report) that the variation you see is simply instruction-execution-time variation.
If you want precise hardware output timing, you must either use never-interrupted programmed I/O or use the various flip-bit-upon-timer-compare features of the uP's hardware peripheral set. The ISR can be used to set things up for the next compare, but not to directly flip output bits.
Once you've figured out how to set up the action to be performed by the hardware on comparator matches, it will be simpler to do it all in a single ISR. That service routine can arrange for both the conditional bit set and the following unconditional bit clear. You probably want the ISR to run during the lengthier part of the cycle so that latency in the actual running of your [a] ISR code does not make the setup too late.
[a. In addition to your ISR code, the programming environment wraps what you wrote in a context save and restore. This can add execution cycles that might not be expected. Auto-generated context save/restore is often extravagant about tucking away state so that naive programmers are not puzzled by strange foreground-background interactions.]
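For illustration, here is one way that could look on the ATmega328P (a sketch only, not tested; it assumes the signal can move to the OC1B pin, PB2 / Arduino pin 10, since only the OC pins can be driven directly by the compare hardware). Timer1 runs in fast PWM with ICR1 as TOP, so the 340-tick period and the 28-tick pulse are produced entirely by the hardware, and a single ISR merely connects or disconnects the output for the next period:

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

static const uint32_t sequence = 0b111100001111; // example data sequence
static const uint8_t  packetlength = 12;
static uint8_t bit_index;                        // only touched inside the ISR

ISR(TIMER1_COMPB_vect)
{
    // The current pulse (if any) has just ended; pick the next bit and
    // connect or disconnect the hardware output for the *next* period.
    bit_index++;
    if (bit_index == packetlength)
        bit_index = 0;

    if (sequence & (1UL << bit_index))
        TCCR1A |= (1 << COM1B1);   // hardware sets OC1B at BOTTOM, clears it at OCR1B
    else
        TCCR1A &= ~(1 << COM1B1);  // OC1B disconnected; PORTB bit 2 (low) drives the pin
}

int main(void)
{
    DDRB |= (1 << DDB2);           // OC1B = PB2 = Arduino pin 10, idles low

    // Fast PWM, mode 14 (TOP = ICR1), no prescaler, non-inverting OC1B
    TCCR1A = (1 << COM1B1) | (1 << WGM11);
    TCCR1B = (1 << WGM13) | (1 << WGM12) | (1 << CS10);
    ICR1   = 339;                  // period = 340 ticks = 21.25 us at 16 MHz
    OCR1B  = 27;                   // pulse  = 28 ticks  = 1.75 us
    TIMSK1 = (1 << OCIE1B);        // one interrupt, right after each pulse ends
    sei();

    for (;;) { }                   // the waveform itself is generated in hardware
}

Because the COM bits are changed from the compare-B interrupt, right after the current pulse has finished, there is almost a full period of slack before the change has to take effect, so ISR latency no longer shows up in the waveform.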

How to get millisecond resolution from DS3231 RTC

How to get accurate milliseconds?
I need to measure the delay of sending data from Arduino A to Arduino B. I tried to use a DS3231 but I cannot get milliseconds from it. What should I do to get accurate milliseconds from the DS3231?
The comment above is correct, but using millis() when you have a dedicated real-time clock makes no sense. I'll provide you with better instructions.
The first thing in any hardware-interfacing project is a close reading of the datasheet. The DS3231 datasheet reveals that there are five possible frequencies of sub-second outputs (see page 13):
32 KHz
1 KHz
1.024 KHz
4.096 KHz
8.192 KHz
These last four options are achieved by various combinations of the RS1 and RS2 control bits.
So, for example, to get exact milliseconds, you'd target option 2, 1KHz. You set RS1 = 0 and RS2 = 0 (see page 13 of the datasheet you provided) and INTCN = 0 (page 9). Then you'd need an ISR to capture interrupts from the !INT/SQW pin of the device to a digital input pin on your Arduino.
volatile uint16_t milliseconds; // volatile is important here since we change this variable inside an interrupt service routine

ISR(INT0_vect) // or whatever pin/interrupt you choose
{
    ++milliseconds;
    if (milliseconds == 999) // roll over to zero
        milliseconds = 0;
}
OR:
const int RTCpin = 3; // use any digital pin you need

void setup()
{
    pinMode(RTCpin, INPUT);
    // Enable the INT0 external interrupt
    GICR |= (1 << INT0);
    // Any signal change on INT0 triggers an interrupt
    MCUCR |= (1 << ISC00);
    MCUCR &= ~(1 << ISC01);
}
If these commands in setup() don't work on your Arduino, google 'Arduino external interrupt INT0'. I've shown you two ways, one with Arduino code and one in C.
Once you have this ISR working and pin3 of the DS3231 connected to a digital input pin of your choosing, that pin will be activated at 1KHz, or every millisecond. Perfect!
// Down in the main program you now have access to milliseconds. You might
// want to start off by setting, whenever the RTC's seconds value changes:
milliseconds = 0; // so you can measure milliseconds since the last second
That's all there is to it. All you need to learn now is how to set the command register using I2C commands and you're all set.
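If it helps, here is a bare-metal sketch of such an I2C (TWI) write for an ATmega328 (untested and without error checking; it assumes the DS3231's 7-bit address 0x68 and its control register at address 0x0E, which holds the RS and INTCN bits - on an Arduino you would normally just use the Wire library):

#include <avr/io.h>
#include <stdint.h>

static void twi_init(void)
{
    TWSR = 0;                        // prescaler = 1
    TWBR = 72;                       // ~100 kHz SCL with a 16 MHz CPU clock
}

static void twi_start(void)
{
    TWCR = (1 << TWINT) | (1 << TWSTA) | (1 << TWEN);
    while (!(TWCR & (1 << TWINT)));  // wait for the START to go out
}

static void twi_write(uint8_t data)
{
    TWDR = data;
    TWCR = (1 << TWINT) | (1 << TWEN);
    while (!(TWCR & (1 << TWINT)));  // wait for the byte to be clocked out
}

static void twi_stop(void)
{
    TWCR = (1 << TWINT) | (1 << TWSTO) | (1 << TWEN);
}

void ds3231_write_control(uint8_t value)
{
    twi_start();
    twi_write(0x68 << 1);            // DS3231 slave address, write
    twi_write(0x0E);                 // control register
    twi_write(value);                // RS2/RS1/INTCN bits for the rate you want
    twi_stop();
}

For example, after twi_init(), calling ds3231_write_control(0x00) clears INTCN and both RS bits, which is the setting described above; see the RS table in the datasheet for the other rates.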
The C code example gains 1ms every second. Should be:
{
    if (milliseconds == 999) // roll over to zero
        milliseconds = 0;
    else
        ++milliseconds;
}

Generating 1sec Time Delay using Timer on Arduino Uno with ATmega328P (C Language)

Hardware : Arduino Uno with ATmega328P
Software : Atmel Studio 6.2.1153, Arduino 1.0.6
Calculating the cycles needed for 1s
Clock Frequency of ATmega328P = 16M Hz
Clock Frequency with Prescaler CLK/1024 = 16M / 1024 = 15625 Hz
Clock Period with Prescaler CLK/1024 = (15625)^-1 = 6.4*10^-5s
Cycles of 1s = 1 / 6.4*10^-5 = 15625 cycles
Counts needed for 1s = 15625 - 1 = 15624 counts = 0x3D08
My code
OCR1AH = 0x3D; //Load higher byte of 15624 into output compare register
OCR1AL = 0x08; //Load lower byte of 15624 into output compare register
TCCR1A = 0b00000000;
TCCR1B = 0b00001101; //Turn on CTC mode and prescaler of CLK/1024
while((TIFR1 & (1<<OCF1A)) == 0); //If OCF1A is set (TCNT1 = OCR1A), break
TCCR1A = 0;
TCCR1B = 0; //Stop the timer
TIFR1 &= ~(1<<OCF1A); //Clear OCF1A for the next time delay
When I click "Start Debugging and Break" and then "step over" the above code as a function, it always shows "running" and never stops. Why? How do I solve it?
Thank you for your help.
Your configuration looks OK.
When you pause execution of your code in the debugger, peripherals (which run independently of the CPU) are not stopped. Some architectures/microcontrollers have additional hardware registers to stop peripherals (such as DMA or timers) while code execution is halted by the debugger; AVR does not.
If you are running your code in the simulator, you should be able to see all the registers being set by the instructions, step by step. I advise you to turn off code optimizations for debugging purposes.
For debugging code on hardware (in the case of the AVR architecture) you need an additional hardware debugger. The debugging provided by Arduino is only software running alongside your code on the MCU, and in some cases you cannot rely on it.
Anyway, your code looks right. The only mistake: you must write a logic 1 to TIFR1 to clear the flag bit.
You should run your code in a loop to check whether the timer is working:
OCR1AH = 0x3D; //Load higher byte of 15624 into output compare register
OCR1AL = 0x08; //Load lower byte of 15624 into output compare register
TCCR1A = 0b00000000;
TCCR1B = 0b00001101; //Turn on CTC mode and prescaler of CLK/1024
while(1)
{
    while((TIFR1 & (1<<OCF1A)) == 0); //If OCF1A is set (TCNT1 = OCR1A), break
    //Blink LED here
    TIFR1 = (1<<OCF1A); //Writing logic 1 to that register clears it
}
Of course, if you don't want to run the code in a loop, just remove it; that code is just for testing purposes.
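For reference, here is a minimal complete test program built around that loop (a sketch only; it assumes the on-board LED on PB5, Arduino pin 13, and a 16 MHz clock):

#include <avr/io.h>

int main(void)
{
    DDRB |= (1 << DDB5);               // on-board LED on PB5 (Arduino pin 13) as output

    OCR1A  = 15624;                    // 16 MHz / 1024 = 15625 counts per second, 0..15624
    TCCR1A = 0;
    TCCR1B = (1 << WGM12) | (1 << CS12) | (1 << CS10); // CTC mode, prescaler CLK/1024

    while (1)
    {
        while ((TIFR1 & (1 << OCF1A)) == 0)
            ;                          // wait ~1 s for the compare match
        PORTB ^= (1 << PORTB5);        // toggle the LED
        TIFR1 = (1 << OCF1A);          // writing a logic 1 clears the flag
    }
}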
Edit:
Take a look at the ATmega328 datasheet: http://www.atmel.com/Images/doc8161.pdf, pages 139-140, the TIFR1 register, bit 1 - OCF1A:
OCF1A is cleared by hardware when executing the corresponding interrupt. OCF1A can also be cleared by writing a logic one to that bit.
Some bits in hardware registers (typically flag bits that software may only clear, never set) are cleared by writing a logic 1 to them. The hardware attached to the register sets the bit to 0 when you write a 1; writing a 0 is ignored and does not affect the bit. This prevents software from setting a bit that only the hardware should set (in this case, the timer). Think about it: there is no sense in setting an output compare flag from code, while the other actions (reading the value, clearing the bit) do make sense and are allowed.
There are also some registers which can only be written, not read (i.e., reads always return the same value).
When working with hardware registers, always check in the datasheet how to set and clear the bits.

Accuracy of Timer1 as real time clock with PIC Interrupts on 16F*

I'm using C with the BoostC compiler, and I'm worried about how accurate my code is. The configuration below ticks at more or less 1 Hz (tested with an LED and the naked eye). It uses an external 32 kHz watch crystal for Timer1 on the 16F74.
I'm hoping someone can tell me...
Does my code below need any adjustments? What would be the easiest way of measuring the accuracy to the nearest CPU clock period? Will I need to get dirty with assembly to reliably ensure accuracy of the 1Hz signal?
I'm hoping the time taken to execute the timer handler (and others) doesn't even come into the picture, since the timer is always counting. So long as the handlers never take longer to execute than 1/32kHz seconds, will the 1Hz signal have essentially the accuracy of the 32kHz Crystal?
Thanks
#define T1H_DEFAULT 0x80
#define T1L_DEFAULT 0

volatile char T1H = T1H_DEFAULT;
volatile char T1L = T1L_DEFAULT;

void main(void){
    // setup
    tmr1h = T1H;
    tmr1l = T1L;
    t1con = 0b00001111; // - - T1CKPS1 T1CKPS0 T1OSCEN NOT_T1SYNC TMR1CS TMR1ON
    // ...
    // do nothing repeatedly while no interrupt
    while(1){}
}

void interrupt(void) {
    // Handle Timer1
    if (test_bit(pir1, TMR1IF) & test_bit(pie1, TMR1IE)){
        // reset timer's 2x8 bit value
        tmr1h = T1H;
        tmr1l = T1L;
        // do things triggered by this time tick
        // reset T1 interrupt flag
        clear_bit(pir1, TMR1IF);
    } else {
        // ... handle other interrupts
    }
}
I can see some improvements...
Your timer reload inside the interrupt isn't accurate.
When you set the timer counter in the interrupt...
tmr1h = T1H;
tmr1l = T1L;
...then you overwrite the current value, which isn't good for accuracy (the ticks that have already accumulated in the low byte since the overflow are thrown away).
...just use:
tmr1h = T1H; // tmr1h must still be 0 at this point!
Or, even better, just set bit 7 of the tmr1h register.
The compiler must compile this operation to a single ASM instruction like...
bsf tmr1h, 7
...to avoid losing data in the TMR1 register, because if it is done with more than one instruction the hardware can increment the counter between the read, modify and write steps.
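Put together, the suggested ISR looks roughly like this (a sketch in the same BoostC style, assuming T1H_DEFAULT = 0x80 as in the question and that the compiler emits a single bsf for the |= operation):

void interrupt(void) {
    // Handle Timer1
    if (test_bit(pir1, TMR1IF) & test_bit(pie1, TMR1IE)){
        // Re-arm the next 1 s period without discarding the ticks that have
        // already accumulated in tmr1l since the overflow:
        tmr1h |= 0x80;               // ideally a single bsf tmr1h,7
        // do things triggered by this time tick
        clear_bit(pir1, TMR1IF);     // reset T1 interrupt flag
    }
    // ... handle other interrupts
}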
