In all of today's electronic devices, such as mobile phones, we see a visual battery charge indicator: a graphical container composed of bars that fill one by one while the battery charges, and empty one by one as the device is used.
I see the same thing on laptops in every GUI operating system, such as Windows and Linux.
I am not sure whether I am posting in the right place, because this question spans systems programming and electrical engineering.
A visual illustration of my question is here:
http://gickr.com/results4/anim_eaccb534-1b58-ec74-697d-cd082c367a25.gif
I have been wondering for a long time what logic this works on.
How does the program manage to monitor the battery?
I made a simple scheme based on amp-hours, estimating how much time each bar should take to fill while the battery is charging, but that does not work reliably for me.
I also read the source code of a friend's battery indicator Android application, but the functions he used were system calls into the Android (Linux) kernel.
I need to understand the thing from scratch.
I need to know this logic, because I am working on an operating system kernel project which will later need a battery charging monitor.
But the thing I will implement right now is just to show the percentage on the console screen.
Please give me an idea of how I can do it. Thanks in advance.
Integrating amps over time is not a reliable way to code a battery meter. Use voltage instead.
Refer to the battery's datasheet for a graph of (approximate) voltage vs. charge level.
Obtain an analog input to your CPU. If it's a microcontroller with a built-in ADC, then hopefully that's sufficient.
Plug a reference voltage (e.g. a Zener diode) into the analog input. As the power supply voltage decreases, the reference will appear to increase, because the ADC only measures voltage proportionally. The CPU may include a built-in reference voltage generator that you can mux to the ADC, or the ADC might always measure relative to a fixed reference instead of rail-to-rail. Consult the ADC manual (or the ADC section of the microcontroller manual) for details.
Ensure that the ADC provides sufficient accuracy.
Sample the battery level and run a simple low-pass filter to eliminate noise, like displayed_level = (displayed_level * 0.95) + (measured_level * 0.05). Run that through an approximate function mapping the apparent reference voltage to the charge level.
Display the charge level.
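As a minimal sketch of those steps in C: read_adc() is a stub standing in for your ADC driver, and the voltage-to-charge table is an invented example of what you would copy from the battery's datasheet, not real data.

/* Hedged sketch: ADC reference trick + low-pass filter + datasheet lookup.
 * read_adc() and the curve[] values are placeholders, not real data. */
#include <stdio.h>

/* Stub: replace with your ADC driver reading the reference channel. */
static unsigned read_adc(void) { return 1250; }

/* Invented example of a datasheet curve: battery millivolts vs. percent. */
static const struct { unsigned mv, pct; } curve[] = {
    { 3300, 0 }, { 3600, 20 }, { 3700, 50 }, { 3900, 80 }, { 4200, 100 },
};
#define N_POINTS (sizeof curve / sizeof curve[0])

/* Piecewise-linear interpolation over the datasheet curve. */
static unsigned charge_percent(unsigned mv)
{
    if (mv <= curve[0].mv) return 0;
    for (unsigned i = 1; i < N_POINTS; i++)
        if (mv <= curve[i].mv)
            return curve[i-1].pct + (mv - curve[i-1].mv)
                 * (curve[i].pct - curve[i-1].pct)
                 / (curve[i].mv - curve[i-1].mv);
    return 100;
}

int main(void)
{
    /* With a fixed 1.2 V reference on a 12-bit ADC whose full scale is the
     * battery rail, the rail is 1200 mV * 4096 / raw: the reference "rises"
     * in ADC counts as the battery sags, so the ratio is inverted. */
    double displayed_mv = 1200.0 * 4096.0 / read_adc();

    for (int i = 0; i < 100; i++) {
        double measured_mv = 1200.0 * 4096.0 / read_adc();
        /* the low-pass filter from the step above */
        displayed_mv = displayed_mv * 0.95 + measured_mv * 0.05;
        printf("battery: %3u%%\r", charge_percent((unsigned)displayed_mv));
        fflush(stdout);
    }
    putchar('\n');
    return 0;
}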
I'm currently developing a low-power IoT node based on Contiki-NG running on a TI CC1350 LaunchPad board. My problem is that my power consumption is always >6 mA.
Compiling and running the energest example, I can see the radio is always listening, no matter whether I compile with MAKE_MAC = MAKE_MAC_NULLMAC and MAKE_NET = MAKE_NET_NULLNET. Using
MAKE_MAC = MAKE_MAC_TSCH or MAKE_MAC = MAKE_MAC_CSMA increases consumption by around 2 mA, as the CPU is then always active, but the radio is never duty-cycled.
Is there a way of reducing current consumption for Contiki-NG on this platform?
With Contiki-NG, you have two options:
Use CSMA or NullMAC and turn off the radio from the application code with NETSTACK_RADIO.off() (see the sketch at the end of this answer).
Use TSCH and make sure the schedule has some idle slots. The radio is going to turn off automatically once the node has joined a TSCH network.
If you use the latter, still see high consumption, and you're sure about your code, submit an issue to the Contiki-NG GitHub repository; there could be an energy consumption bug in the OS specific to the CC1350 board.
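For the first option, a minimal sketch of what the application code might look like (the process name and the 60-second interval are placeholders, and the packet-sending step is left as a comment):

/* Hedged sketch of option 1: keep the radio off except when sending.
 * Assumes CSMA or NullMAC; the timing is a placeholder. */
#include "contiki.h"
#include "net/netstack.h"

PROCESS(low_power_process, "low-power node");
AUTOSTART_PROCESSES(&low_power_process);

PROCESS_THREAD(low_power_process, ev, data)
{
  static struct etimer timer;

  PROCESS_BEGIN();

  NETSTACK_RADIO.off();                 /* radio off by default */

  while(1) {
    etimer_set(&timer, CLOCK_SECOND * 60);
    PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));

    NETSTACK_RADIO.on();                /* wake the radio ... */
    /* ... send your packet here (e.g. via simple-udp) ... */
    NETSTACK_RADIO.off();               /* ... and put it back to sleep */
  }

  PROCESS_END();
}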
I'm studying embedded programming, so I'm new in this field.
Can someone explain how to identify tasks/threads from a given system description? Also, how can I estimate timing constraints, execution times, and so on? I'm really stuck.
Here is the system description I'm working on:
Implement a 2-degree-of-freedom servo motor system. A two-axis joystick is used for controlling the servo motors. Additionally, enable recording and reproducing the user's joystick path, so that an identical movement can be replicated multiple times. It is necessary to support the recording of 3 motion profiles with a length of at least 5 minutes. The profiles need to be recorded to non-volatile memory, with recording/playback controlled via the joystick button. Provide appropriate signalling of the currently selected profile and operating mode (recording/playback) using one LED for each profile. In the RT-Thread system, realize the necessary drivers on the Raspberry Pi Pico platform as support for the devices used, and the application itself that implements the described system with clearly separated threads for each of the observed functionalities.
It is tempting to partition functionally, but in practice you should partition based on deadlines, determinism and update rates. For simple systems, that may turn out to be the same as a functional partitioning. The part:
clearly separated threads for each of the observed functionalities
may lead you to an inappropriate partitioning. However, that may be the partitioning your tutor expects, even if it is sub-optimal.
There is probably no single solution, but obvious candidates for tasks are:
Joystick reader,
Servo controller,
Recorder,
Replayer.
Now, considering these candidates, it can be seen that joystick control and replay control are mutually exclusive, and also that replay itself is selected through the joystick button. It therefore makes sense to make those a single task, not least because the replayer will communicate with the servo controller in the same way as the joystick reader does. So you might have:
Joystick reader / replayer,
Servo controller,
Recorder.
The recorder is necessarily a separate thread, because access to NV memory may be slow and non-deterministic. You would need to feed time and x/y position data into a message queue of sufficient length to ensure that recording does not affect the timing of the motion control.
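As a sketch of that decoupling with RT-Thread's message-queue API (the queue depth, thread priority, and sample layout are illustrative guesses, not values from the assignment):

/* Hedged sketch: decoupling motion control from NV-memory writes with an
 * RT-Thread message queue. Names and sizes are illustrative. */
#include <rtthread.h>

typedef struct {
    rt_tick_t tick;      /* when the sample was taken */
    rt_int16_t x, y;     /* joystick / servo position */
} sample_t;

static rt_mq_t sample_mq;

static void recorder_entry(void *param)
{
    sample_t s;
    while (1) {
        /* block until a sample arrives, then do the (slow) flash write */
        if (rt_mq_recv(sample_mq, &s, sizeof(s), RT_WAITING_FOREVER) >= 0) {
            /* write_sample_to_flash(&s);  -- your NV-memory driver */
        }
    }
}

void recorder_init(void)
{
    /* deep enough to absorb worst-case flash write latency */
    sample_mq = rt_mq_create("samples", sizeof(sample_t), 128, RT_IPC_FLAG_FIFO);

    rt_thread_t t = rt_thread_create("recorder", recorder_entry, RT_NULL,
                                     1024, 20, 10);  /* low-ish priority */
    rt_thread_startup(t);
}

/* Called from the joystick-reader thread at its polling rate. */
void recorder_push(rt_int16_t x, rt_int16_t y)
{
    sample_t s = { rt_tick_get(), x, y };
    rt_mq_send(sample_mq, &s, sizeof(s));   /* non-blocking; fails if full */
}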
It is not clear what kind of servo you are using, or whether your application is responsible for the PID motion control or simply sends a position signal to a servo controller. If the latter, there may be no reason to separate the servo control from the reader/replayer, in which case you would have:
Joystick reader / replayer / Servo controller,
Recorder.
Other solutions are possible. For example, the recorder might record the servo position over time rather than the joystick position, and have the servo controller handle replay:
Joystick reader,
Servo controller / replayer,
Recorder.
That makes sense if the joystick polling and servo update rates differ, because you'd want to replay what the servo did, not what the joystick did.
I have ArduPilot on a plane, using a 3DR radio back to a Raspberry Pi on the ground that does some advanced geographic and attitude-based maths and provides audio feedback to the pilot (rather than making them look at a screen).
I am using DroneKit-Python, which in turn uses MAVProxy and MAVLink. What I am finding is that I am only getting new attitude data on the Pi at about 3 Hz, and I am not sure where the bottleneck is:
The 3DR radio is running at 57.6 kbps and all is happy.
I have turned off the automatic download of logs from ArduPilot to the Pi (part of MAVProxy).
The Pi can ask for attitude data (roll, yaw, etc.) through the DroneKit-Python API as often as it likes, but only gets new data (i.e. a change in value) about every 1/3 second.
I am not deep enough inside the underlying architecture to understand where the bottleneck may be -- can anyone help? Is it likely a round-trip message response time from base to plane and back (others seem to get around 8 Hz from MAVLink, from what I have read)? Or latency across the combination of MAVProxy, MAVLink and DroneKit? Or is there some setting inside ArduPilot or the telemetry that could be driving this?
I am aware this isn't necessarily a DroneKit issue, but I'm not really sure where it belongs, as it spans quite a few components.
Requesting individual packets should work, but that mechanism was never meant to be used many times per second.
In order to get a certain packet many times per second, set up streams. A stream will trigger a certain number of times per second, and will then send whichever packet is associated with it, automatically. The ATTITUDE message is in the group called EXTRA1.
Let's suppose you want to receive 10 ATTITUDE messages per second. The relevant parameter is called SR0_EXTRA1; it defines the number of ATTITUDE packets sent per second. The default is 4. Try increasing that parameter to 10.
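If you end up talking raw MAVLink from C instead of going through DroneKit, the one-shot stream request looks roughly like this (REQUEST_DATA_STREAM is the classic mechanism behind those SRx parameters; serial_write() here is a stub, and the 255/0 and 1/1 system/component IDs are typical assumptions):

/* Hedged sketch: ask the autopilot for the EXTRA1 stream (which carries
 * ATTITUDE) at 10 Hz. serial_write() is a placeholder for your radio link. */
#include <stdio.h>
#include <mavlink.h>   /* generated MAVLink common-dialect headers */

static void serial_write(const uint8_t *buf, uint16_t len)
{
    fwrite(buf, 1, len, stdout);   /* stub: replace with the 3DR radio port */
}

void request_attitude_stream(void)
{
    mavlink_message_t msg;
    uint8_t buf[MAVLINK_MAX_PACKET_LEN];

    /* sender = 255/0 (a typical GCS ID), target = 1/1 (the autopilot) */
    mavlink_msg_request_data_stream_pack(255, 0, &msg,
                                         1, 1,
                                         MAV_DATA_STREAM_EXTRA1,
                                         10,  /* messages per second */
                                         1);  /* 1 = start, 0 = stop */

    uint16_t len = mavlink_msg_to_send_buffer(buf, &msg);
    serial_write(buf, len);
}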
I have a steering wheel game controller. Now I am trying to write a driver for playing a racing game such as NFS-17. I know the game uses XInput. I will write the driver in C.
My questions:
1) How do I send messages to the game when I turn the steering wheel?
2) Is this done using SendMessage()?
3) If SendMessage() is used, how do I get the game window's handle, and which wParam and lParam should I send?
XInput is for Xbox 360 controllers. For the steering wheel's X axis you could use the gamepad's two triggers. XInput is a getter/setter, nothing more :P It gets you the state of the connected controller and reports whether something is connected or not, at your request only; it doesn't monitor or save anything anywhere, nor does it send messages to apps that have input focus. Sending messages is your job, via an app that you could build.
Now, down to what you actually need. You could write a simple C++ app that would scan the state of the controller, but wait... you don't have an Xbox 360 controller :P So first test how your app responds to your steering wheel with its OEM driver. If you can't read it as connected on any USB port (check GetInputState() on MSDN), then try a generic Windows driver (let it install whatever suits it; it might even perceive your steering wheel as an Xbox 360 controller, they are very similar up to a point, the difference being the number of axes and buttons on more expensive wheels).
Then, when you have managed to read the state of your controller (steering wheel), use GetDC(windowHandle), where windowHandle is retrieved via (I don't remember for sure) FindWindow(name/title of the window). Use Alt-Tab to check with mouse-over whatever title is used for the game's window.
When you are in possession of the window handle and the device context, you can send messages to the window's WndProc callback via the window handle, and even display text and draw images/shapes via the device context. The messages should be the virtual-key codes (VK_UP for the Up arrow, for instance; look them up) that would be pressed on the keyboard if there were no wheel available. The trick is to simulate a PWM for every degree of wheel rotation: send a high-frequency alternation of press/release messages for the VK_LEFT key code for a large wheel turn to the left, and for full steering lock just send one key-down message until the lock is released, then a key-up message followed by the corresponding alternation frequency again. For small wheel angles, leave larger pauses between the key press/release messages.
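Below is a minimal sketch of that PWM idea, assuming the game actually reacts to plain window messages (many titles poll DirectInput/XInput instead, so treat it purely as an illustration); the window title and the use of the left stick's X axis as the "wheel" are placeholders.

/* Hedged sketch: PWM-style key simulation from a controller axis.
 * Assumes the game accepts plain WM_KEYDOWN/WM_KEYUP (many don't). */
#include <windows.h>
#include <xinput.h>
#include <stdio.h>

#define PERIOD_MS 50            /* one PWM period; tune empirically */

int main(void)
{
    /* "NFS" is a placeholder -- use the game window's real title */
    HWND game = FindWindowA(NULL, "NFS");
    if (game == NULL) { fprintf(stderr, "game window not found\n"); return 1; }

    for (;;) {
        XINPUT_STATE st;
        if (XInputGetState(0, &st) != ERROR_SUCCESS) break;

        /* treat the left stick's X axis (-32768..32767) as the wheel */
        SHORT x = st.Gamepad.sThumbLX;
        double duty = (x < 0 ? -(double)x : (double)x) / 32768.0;
        WPARAM key = (x < 0) ? VK_LEFT : VK_RIGHT;

        DWORD on_ms = (DWORD)(duty * PERIOD_MS);
        if (on_ms > 0) {
            /* a real lParam carries scan-code/transition bits; 0 is often enough */
            PostMessageA(game, WM_KEYDOWN, key, 0);  /* "key pressed"  */
            Sleep(on_ms);                            /* pulse width    */
            PostMessageA(game, WM_KEYUP, key, 0);    /* "key released" */
        }
        Sleep(PERIOD_MS - on_ms);                    /* rest of period */
    }
    return 0;
}

Link against user32.lib and xinput.lib; note that some programs do inspect the lParam bits, in which case you would need to fill them in properly.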
A bonus of making a driver is that you can adjust the sensitivity, button mappings and dead zones to your own liking.
On the other hand, there are simulators like this already, and then again you could build your own steering wheel with a 6+1 transmission using just a USB communication API (read/write data from/to the USB port, nothing fancy :) ) from Windows, a microcontroller (PIC, ATmega, NXP, etc., whatever is cheaper, with hardware USB at the ready, or plus a USB controller), variable resistors for the X, Y, etc. axes, and some parts for the driving system. Good luck, and post it on YouTube :P!
P.S. Hell, it is way too late now, but hey... I felt I needed to write some mumbo-jumbo I have never done, but which would work... if I weren't that lazy. :)
Update: XInput may work if the device is compatible (probably high-end wheels and joysticks), but DirectInput knows about steering wheels too, all of them. For me it was the only solution. Microsoft says both XInput and DirectInput should be used, as XInput offers audio and more features for compatible products.
The algorithm is sound, though open to many implementations. I implemented it successfully for GTA SA, since dedicated tools didn't support my version. Now I have acceleration and brake from the pedals, finally :) I used a timer to achieve a certain frequency (of my own choice, empirical stuff) and modulated the pulse width from near 0 s to the full period (a light touch of the pedal versus pedal-to-the-metal). On every pulse I sent two messages: W/S key down at the start of the pulse, and W/S key up when the calculated pulse width reached its end value. The pulse width is the percentage of the total period corresponding to how far the pedal has traveled of its maximum travel distance, optionally modified by a sensitivity setting (DirectInput reports an integer in [-1000, 1000], I think: [full brake, full acceleration]).
So far I have been able to create an application where the Kinect sensor is in one place. I have used speech recognition, EmguCV (OpenCV), and AForge.NET to help me process images and learn and recognize objects. It all works fine, but there is always scope for improvement, and I am posing some problems: [ignore the first three; I want an answer to the fourth]
The frame rate is horrible. It's like 5 fps even though it should be like 30 fps (and this is WITHOUT any of the processing). My application runs fine: it gets color as well as depth frames from the camera and displays them, yet the frame rate is still bad. The samples run awesomely, around 25 fps, but even when I ran the exact same code from the samples it just wouldn't budge. :-( [There is no need for code; please tell me the possible problems.]
I would like to create a little robot on which the Kinect and my laptop will be mounted. I tried using the Mindstorms kit, but the low-torque motors don't do the trick. Please tell me how I can achieve this.
How do I supply power on board? I know that the Kinect uses 12 volts for its motor, but it gets that from an AC adapter. [I would not like to cut my cable and replace it with a 12-volt battery.]
The biggest question: how in this world will it navigate? I have done A* and flood-fill algorithms. I have read this paper about a thousand times and got nothing from it. I have the navigation algorithm in my mind, but how on earth will the robot localize itself? [It should not use GPS or any other kind of sensor, just its eyes, i.e. the Kinect.]
Any help would be awesome. I am a newbie, so please don't expect me to know everything. I have been searching the internet for 2 weeks with no luck.
Thanks a lot!
Localisation is a tricky task, as it depends on having prior knowledge of the environment in which your robot will be placed (i.e. a map of your house). While algorithms exist for simultaneous localisation and mapping, they tend to be domain-specific and as such not applicable to the general case of placing a robot in an arbitrary location and having it map its environment autonomously.
However, if your robot does have a rough (probabilistic) idea of what its environment looks like, Monte Carlo localisation is a good choice. On a high level, it goes something like:
Firstly, the robot should make a large number of random guesses (called particles) as to where it could possibly be within its known environment.
With each update from the sensor (i.e. after the robot has moved a short distance), it adjusts the probability that each of its random guesses is correct using a statistical model of its current sensor data. This can work especially well if the robot takes 360° sensor measurements, but this is not completely necessary.
Periodically, the particles are resampled: unlikely guesses are discarded and likely ones duplicated, so that over time the cloud of particles converges on the robot's true position.
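A rough sketch of that loop in C; the map size, noise figures and the depth-prediction stub are all invented, and a real version would raycast each particle's pose into your map and fuse many Kinect readings per update:

/* Hedged sketch of Monte Carlo localisation. predict_depth() is a stub:
 * a real version would raycast a particle's pose into your map. */
#include <stdlib.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N_PARTICLES 1000

typedef struct { double x, y, theta, weight; } Particle;

static double frand(double lo, double hi)
{
    return lo + (hi - lo) * rand() / (double)RAND_MAX;
}

/* Step 1: scatter random guesses over the known map (10 m x 10 m assumed). */
static void init_particles(Particle p[])
{
    for (int i = 0; i < N_PARTICLES; i++)
        p[i] = (Particle){ frand(0, 10), frand(0, 10),
                           frand(0, 2 * M_PI), 1.0 / N_PARTICLES };
}

/* Step 2: after each short move, shift every particle by the odometry
 * estimate plus noise, then re-weight it by how well its predicted depth
 * matches what the Kinect actually measured. */
static void update(Particle p[], double dx, double dy, double measured_depth,
                   double (*predict_depth)(const Particle *))
{
    double total = 0.0;
    for (int i = 0; i < N_PARTICLES; i++) {
        p[i].x += dx + frand(-0.05, 0.05);       /* motion noise */
        p[i].y += dy + frand(-0.05, 0.05);
        double err = predict_depth(&p[i]) - measured_depth;
        p[i].weight *= exp(-err * err / 0.5);    /* Gaussian sensor model */
        total += p[i].weight;
    }
    for (int i = 0; i < N_PARTICLES; i++)
        p[i].weight /= total;                    /* normalise */
}

/* Step 3: resample so unlikely guesses die out and likely ones multiply. */
static void resample(Particle p[])
{
    static Particle next[N_PARTICLES];
    for (int i = 0; i < N_PARTICLES; i++) {
        double r = frand(0, 1), acc = p[0].weight;
        int j = 0;
        while (acc < r && j < N_PARTICLES - 1)
            acc += p[++j].weight;
        next[i] = p[j];
        next[i].weight = 1.0 / N_PARTICLES;
    }
    for (int i = 0; i < N_PARTICLES; i++)
        p[i] = next[i];
}

/* Trivial stub so the sketch compiles standalone. */
static double flat_world_depth(const Particle *p) { (void)p; return 2.0; }

int main(void)
{
    static Particle p[N_PARTICLES];
    init_particles(p);
    update(p, 0.1, 0.0, 2.0, flat_world_depth);  /* moved 10 cm, saw 2 m */
    resample(p);
    return 0;
}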
This lecture by Andrew Davison at Imperial College London gives a good overview of the mathematics involved. (The rest of the course will most likely be very interesting to you as well, given what you are trying to create). Good luck!