I have a doubt regarding time division multiplexing.
In GSM we use time division multiplexing.
In time division multiplexing there is a timeslot for each signal. But while using a GSM mobile we get a continuous flow of data, even though we are not actually transmitting continuously, right?
How do we get a continuous signal even though transmission is done using TDM?
The simple answer is that you're not receiving a continuous flow of data. What you are receiving is short bursts of data close enough together that they seem to form a continuous stream.
In case you care, the specific numbers for GSM are that it starts with 4.615 ms frames, each of which is divided into 8 timeslots of .577 ms. So, a particular mobile handset receives data for .577 ms, then waits for ~4 ms, then receives data for another .577 ms, and so on. There's a delay of 3 time slots between receiving and transmitting, so it receives data, then ~1.8 ms later, it gets to transmit for .577 ms.
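If you want to sanity-check those numbers, here is a tiny back-of-the-envelope calculation in C (nothing GSM-specific, just the arithmetic from the figures above):

#include <stdio.h>

int main(void) {
    const double frame_ms = 4.615;           /* GSM TDMA frame length  */
    const double slot_ms  = frame_ms / 8.0;  /* 8 timeslots per frame  */

    printf("slot length        : %.3f ms\n", slot_ms);             /* ~0.577 ms */
    printf("gap between bursts : %.3f ms\n", frame_ms - slot_ms);  /* ~4.0 ms   */
    printf("receive duty cycle : %.1f %%\n", 100.0 / 8.0);         /* 12.5 %    */
    return 0;
}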
The timing is close enough together that even if (for example) the signal gets weak and/or there's interference for a few ms and a particular handset misses the data for one time slot, the gap won't necessarily be audible. Only when the signal is lost for roughly 20 ms or more do most people start to perceive it as actual signal loss. Shorter losses result in lower sound fidelity, but not necessarily in a perceived loss of signal.
It's also worth noting that the newer systems work entirely differently: 3G is based on CDMA, and 4G/LTE on OFDMA, rather than on TDMA.
I am programming a Texas Instruments TMS320F28335 Digital Signal Controller (DSC) and I am writing the communication with a resolver model AD2S1205 (datasheet: https://www.analog.com/media/en/technical-documentation/data-sheets/AD2S1205.pdf). I have to implement the “Supply sequencing and reset” procedure and the procedure for reading and sending the speed and position values through the SPI serial interface back to the DSC. I'm new to firmware and there are several things I don't know how to do:
In the resolver datasheet (page 16) you are asked to wait for the supply voltage Vdd to reach its operating range before changing the reset signal. How can I know when this has happened?
To ask the resolver to read and transmit the position and speed information, I must observe the timing diagram (page 15); e.g. before putting /RD = 0 I have to wait for /RDVEL to remain stable for t4 = 5 ns. What should I insert in the code before the instruction that lowers /RD to make sure that 5 ns have passed? I can pass 0.005 to the DELAY_US(A) function available on the DSC (which delays for A microseconds), but I don't know if that will actually work, or if this is the right way to satisfy a device's timing diagram.
In the “/RD input” section (page 14) it is specified that the high-to-low transition of /RD must occur when the clock is high. How can I be sure that the line of code that lowers /RD is running when the clock is high?
Connect the chip's Vdd to an ADC input on the DSC via a voltage divider and poll it; reset the chip once the measured Vdd is correct.
Your uC runs at 150 MHz, so one clock period is 6.67 ns, which is already longer than the handful of nanoseconds (t4) required. Whatever you do, you can't change the pin faster than that, so this problem does not exist for you.
Connect CLKIN to a GPIO input pin as well, poll it, and change /RD only while the clock is high.
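As a rough sketch of that polling idea (assuming the TI DSP2833x device headers; the pin assignments for CLKIN and /RD below are purely placeholders):

#include "DSP2833x_Device.h"   /* TI header that provides GpioDataRegs */

/* Placeholder pin mapping - adjust to the pins you actually wired. */
#define READ_CLKIN()  (GpioDataRegs.GPADAT.bit.GPIO0)        /* assumed: CLKIN on GPIO0 */
#define RD_LOW()      (GpioDataRegs.GPACLEAR.bit.GPIO1 = 1)  /* assumed: /RD on GPIO1   */

/* Drop /RD shortly after a rising edge of CLKIN, so the high-to-low
 * transition of /RD happens while the clock is high. */
static void assert_rd_on_clk_high(void)
{
    while (READ_CLKIN() != 0)
        ;                 /* wait out the current high phase        */
    while (READ_CLKIN() == 0)
        ;                 /* catch the next rising edge of CLKIN    */
    RD_LOW();             /* /RD falls early in the high half-cycle */
}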
I am new to microprocessor programming and currently have an RGB sensor which reads an RGB value and increments a variable by an arbitrary number. I want the sensor to turn off for 0.3 seconds when I reach a certain value. Is there a way to do this, or will I have to figure out a different way to throw out all the values the RGB sensor produces during that 0.3 second time span? I am writing in C.
Note: The sensor I am currently using is a TCS230.
According to the datasheet, pin 3 is Output Enable (/OE, active low), so if you drive this pin high it should cut off the chip's output.
Or more to your question, it looks like if you drive pins S0 and S1 both low, it will place the chip in a "Power Down" state.
Which option you choose depends on what's more important to you: the quickest reaction time, or conserving power. If you want the quickest reaction time, use /OE; there is a typical 100 ns delay between asserting this signal and the chip responding. The downside is that the chip is still running the whole time. If you choose the power-down state, you will save energy compared with the Output Enable option, but the photodiodes have a typical 100 microsecond "recovery from power down" delay. That's a factor of 1000, so if you're doing time-critical work it's probably not the best option.
Keep in mind, I have never used this chip in my life; I'm just basing my answer on a quick read of the datasheet.
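For illustration only, here is a rough sketch of the /OE approach; set_oe_pin() and delay_ms() are hypothetical board-support helpers you would map to your MCU's GPIO and timer APIs (none of these names come from the TCS230 datasheet):

#include <stdbool.h>

/* Hypothetical board-support helpers. */
extern void set_oe_pin(bool high);   /* drives TCS230 pin 3 (/OE)  */
extern void delay_ms(unsigned ms);   /* blocking millisecond delay */

#define COUNT_THRESHOLD 1000UL       /* example threshold - pick your own */

void check_and_blank_sensor(unsigned long count)
{
    if (count >= COUNT_THRESHOLD) {
        set_oe_pin(true);    /* /OE high: output disabled, chip keeps running */
        delay_ms(300);       /* ignore the sensor for 0.3 s                   */
        set_oe_pin(false);   /* /OE low: output enabled again                 */
    }
}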
I have been benchmarking Redis recently and here is the result I got:
ubuntu 13.10 x86_64 with kernel version 3.11,
Intel® Core™ i5 CPU M 430 @ 2.27GHz × 4
8GB Memory
So, given the same load, multiple connections to Redis perform about 8x faster than a single connection. I am not considering pipelining here, and I have already tried several optimization approaches in my test (using taskset to pin Redis to a single core, using a Unix domain socket).
Two questions:
Why do multiple connections to Redis perform so much faster than a single connection?
Is there any other way (other than pipelining) to improve performance over a single connection?
I did some performance testing on this problem over the last few days and got some results.
The key is to figure out where the extra latency comes from in the single-connection case. My first hypothesis was that it comes from epoll. To find out, I used systemtap and a script to measure the epoll latency for both the 1-connection and the 10-connection case (units are nanoseconds):
From the results you can see that the average time spent inside epoll is almost the same: 8655 ns vs 10254 ns. However, there is a significant difference in the total number of calls. With 10 connections we call epoll_wait 444528 times, but in the single-connection case we call it 2000054 times, roughly 4.5x as often, and that is what leads to the additional time.
The next question is why we call epoll so many fewer times with multiple connections. After exploring the Redis source code a little, I found the reason. Every time epoll_wait returns, it reports the events it is going to handle. The pseudocode looks like this (hiding most of the details):
fds = epoll_wait(fds_we_are_monitoring);
for fd in fds:
    handle_event_happening_in(fd);
The return value fds is the collection of events on which I/O is currently possible, for example a socket with input ready to read. In the single-connection benchmark, fds_we_are_monitoring contains only one descriptor, so every call returns at most one event. In the 10-connection case, a single call can return anywhere up to 10 events, which are then handled together in the for loop. The more events a single epoll_wait return delivers, the faster we go, because the total number of requests is fixed (1M SET requests in this case).
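For reference, here is a minimal self-contained sketch of the same pattern in plain C (not the actual Redis ae.c code), just to show how one epoll_wait call can hand back several ready descriptors at once:

#include <stdio.h>
#include <sys/epoll.h>

#define MAX_EVENTS 64

/* One epoll_wait call may return many ready fds; all of them are
 * serviced before we go back to sleep in the kernel. */
void event_loop(int epfd)
{
    struct epoll_event events[MAX_EVENTS];

    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        if (n < 0) {
            perror("epoll_wait");
            break;
        }
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            /* handle_event(fd) would read the request and write the reply */
            printf("fd %d is ready\n", fd);
        }
    }
}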
To verify that, I used systemtap to plot the distribution of the return values of aeProcessEvents, the function that returns the number of events it handled.
The average is 1 in the single-connection case vs 7 in the 10-connection case, which supports the hypothesis that the more events each epoll return delivers, the faster we can handle the requests, until we become CPU bound.
I think this answers the first question: why multiple connections to Redis perform so much faster than a single connection. However, I am still wondering whether there is any other way (besides pipelining) to improve performance over a single connection. I would appreciate it if anyone else could share some thoughts on this.
I am currently working with an Atmel microcontroller on the EVK1104S board, which carries an AVR32 UC3 device (see its datasheet). We have actually placed this chip on a custom PCB and are in the process of writing more firmware.
Currently, I need to tell the ADC on the microcontroller unit (MCU) to sample at 8k samples/second; in practice this is for sampling a microphone. Either way, the documentation is quite unclear and I was looking for some clarification.
I know that to change the sampling rate I need to change what is called the Mode Register, the register used to configure the ADC (pg 799 of the datasheet). This is the register that lets me change the sample & hold time, the start-up time, and the ADC clock.
Example (from pg 799):
Sample & Hold Time = (SHTIM + 3) / ADCClock
ADCClock = CLK_ADC / ((PRESCAL + 1) * 2)
From what I gather, I only need to change PRESCAL to make ADCClock run at 8 kHz. The problem is that PRESCAL is limited to 8 bits of resolution.
For example, with the controller clock at 12 MHz, solving 12 MHz / x = 8 kHz gives x = 1500. Because x is limited to 8 bits, as I said before, this appears to be impossible, since the maximum is 255.
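Putting numbers into the formula above makes the same point: even with PRESCAL at its maximum of 255,

ADCClock = CLK_ADC / ((PRESCAL + 1) * 2) = 12 MHz / (256 * 2) ≈ 23.4 kHz

so the prescaler alone cannot bring the ADC clock anywhere near 8 kHz.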
I feel that I am doing something wrong here, or not understanding what the datasheet wants me to do. Can anyone confirm what I have described, or point me in the right direction?
You are confusing the sampling rate with the ADC clock rate.
The registers you refer to in the manual only control the taking of one sample. The registers allow you to control how long to sample the voltage for. This may make a difference to you depending on the circuitry involved. That is, you don't want to take the sample too fast for your circuit. (I didn't look closely at the datasheet, but some microcontrollers take several samples and average them. This behaviour is controlled by registers, too.)
But the 8 kHz sampling rate refers to how often you want to sample, that is, the frequency at which you want to trigger the individual conversions. The registers you mention don't address this. You need to use a timer and an interrupt handler to move the data out of the register into storage somewhere (or do something with it) and then trigger the next sample. There is also an interrupt that can deal with the sample as soon as it is ready. In that scheme you use two handlers: one to trigger the samples, another to deal with the samples when they are ready.
Edit:
To explain a bit more why you don't want such a slow ADC clock, consider how the ADC generates its data. It resolves the first bit, waits a cycle, resolves the second bit, and so on for 10 cycles. The accuracy of the result depends on the signal staying stable over all these steps; if the signal is changing, the bits of the result are meaningless. You need to set the prescaler and ADC clock fast enough that the signal does not change during a conversion, but slow enough for the signal to settle.
So yes, you want to use a clock and interrupt handler to read the data then trigger the next reading. The ADC runs independently of the processor, and will be ready by the time the interrupt runs again. (The first reading will be garbage, but you can set a flag or something to guard against that.)
volatile int running = 0;        /* becomes true after the first conversion is triggered */

void Handler(void)               /* output-compare ISR, fires every 1/8000 s */
{
    if (running)
        process_sample();        /* placeholder: read and use the previous result */
    running = 1;
    trigger_adc();               /* placeholder: start the next conversion        */
    output_compare += TICKS_PER_8KHZ_PERIOD;   /* i.e. 1/8000 s of timer ticks    */
}
I need to validate and characterize CAN bus traffic for our product (call it the Unit Under Test, UUT). I have a machine that sends a specified number of CAN frames to our product. Our product is running a Linux-based custom kernel. The CAN frames are pre-built in software on the sender machine using a specific algorithm, and the UUT uses the same algorithm to verify the received frames.
Also, and here is where my questions lie, I am trying to calculate some timing data in the UUT software. So I basically do a read loop as fast as possible. I have a pre-allocated buffer to store the frames, so I just call read and increment the pointer to the buffer:
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, timespec_start_ptr);
while ((frames_left--) > 0)
    read(can_sock_fd, frame_mem_ptr++, sizeof(struct can_frame));
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, timespec_stop_ptr);
My question has to do with the times I get when I calculate the difference between these two timespecs (the calculation I use is correct; I have verified it, and it is GNU's algorithm).
Also, running the program under the time utility agrees with my times. For example, my program is called tcan, so I might run
[prompt]$ time ./tcan can1 -nf 10000
to run on the can1 socket with 10000 frames. (This is FlexCAN with a socket-based interface, BTW.)
Then I use the time difference to calculate the data transfer speed I obtained: I received num_frames in that time span, so I calculate frames/sec and bits/sec.
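For concreteness, the calculation is roughly the sketch below, where BITS_PER_FRAME is just an assumed per-frame bit count on the wire (the real value depends on ID length, DLC and bit stuffing), not something taken from my code:

#include <stdio.h>
#include <time.h>

#define BITS_PER_FRAME 108.0   /* assumed size of one frame on the wire */

static double elapsed_sec(const struct timespec *start, const struct timespec *stop)
{
    return (double)(stop->tv_sec - start->tv_sec)
         + (double)(stop->tv_nsec - start->tv_nsec) / 1e9;
}

static void report(long num_frames, const struct timespec *start, const struct timespec *stop)
{
    double secs = elapsed_sec(start, stop);
    double fps  = (double)num_frames / secs;
    printf("%.1f frames/sec, %.1f bits/sec\n", fps, fps * BITS_PER_FRAME);
}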
I am getting bus speeds that are 10 times the CAN bus speed of 250000 bits per sec. How can this be? I only get 2.5% CPU utilization according to both my program and the time program (and the top utility as well).
Are the values I am calculating meaningful? Is there something better I could do? I am assuming that since time reports real times much greater than user+sys, there must be some time accounting being lost somewhere. Another possibility is that the result is actually correct; I don't know, it's puzzling.
This is kind of a long shot, but what if read() is returning early because otherwise it would have to wait for incoming data? The fastest data to read is none at all :)
It would mess up the timings, but have you tried running this loop with error checking? Or implementing the loop via recv(), which should block unless you have asked it not to?
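Something along these lines, purely as a sketch (the function and counters are just illustrative), so that short or failed reads get counted instead of silently being treated as received frames:

#include <linux/can.h>
#include <unistd.h>

/* Error-checked variant of the receive loop: only complete frames advance
 * the buffer pointer, and anything else is counted rather than ignored. */
long recv_frames(int can_sock_fd, struct can_frame *buf, long frames_left)
{
    long short_or_failed = 0;

    while (frames_left > 0) {
        ssize_t n = read(can_sock_fd, buf, sizeof(struct can_frame));
        if (n == (ssize_t)sizeof(struct can_frame)) {
            buf++;
            frames_left--;
        } else {
            short_or_failed++;   /* EAGAIN, partial read, or error */
        }
    }
    return short_or_failed;
}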
Hopefully this helps.