I bought an HC-05 module. I used it two or three times and it was working correctly, but now it is not blinking. Can anyone tell me what the problem is?
Bad or defective item; it happens often.
Maybe the power supply was not stable... or try it on another 8051 microcontroller.
But if it blinks, that means it's receiving power. If not, buy another one:
http://www.aliexpress.com/wholesale?catId=0&initiative_id=SB_20160518055547&SearchText=HC-05+module
I am learning to rewrite the code for the microcontroller integrated into SIM development modules (such as the SIM800C) so I can switch a device without having to use another MCU. If we can do that, we only need to add one transistor to switch a relay on and off.
I know that the MCU inside the SIM module (e.g. an STM32F...) has functions to receive and read the personal information stored on the SIM card and to decode the signal sent by the mobile base station. For example, when a call comes in, it will generate the sequence:
RING
+CLIP: "01203360211",161,"",0,"",0
....
So can anyone help me understand: when the MCU outputs a string like the one above, what will its input be? A bit pattern like 11110000, or something special in some other form? I'm really confused. Thanks, everyone!
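For what it's worth, a sketch of how those lines are usually consumed: they arrive as plain ASCII bytes on the module's UART, one byte per character ('R' is 0x52, i.e. 01010010), not as some special bit pattern. Reflashing the SIM800C's internal firmware is not publicly documented, so this assumes a host MCU reading the module's serial output; uart_read_line() and relay_set() are hypothetical helpers.

#include <string.h>

extern int uart_read_line(char *buf, int maxlen); /* blocking line read from the module's UART (hypothetical) */
extern void relay_set(int on);                    /* drives the relay transistor (hypothetical) */

void watch_for_calls(void)
{
    char line[128];
    for (;;) {
        if (uart_read_line(line, sizeof line) <= 0)
            continue;
        /* An incoming call produces exactly the ASCII lines shown above:
         * RING, then +CLIP: "01203360211",161,"",0,"",0 */
        if (strncmp(line, "+CLIP:", 6) == 0 &&
            strstr(line, "\"01203360211\"") != NULL)
            relay_set(1); /* recognized caller: switch the relay on */
    }
}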
I'm fairly new to this and my grammar isn't good, but here we go.
I'm planning to light up a clear cast figurine for my school project using addressable RGB SK6812 LEDs. They are 2020 in size and pretty convenient for my figure, since it's fairly small (15 cm) and kinda cramped. I'm trying to light it up with some effect like a burst going on; please see my Tinkercad pic, sorry, it's the easiest method I can do for now: attached design
As you can see, I tried to spread the LEDs across every limb. LED no. 1 (the first to get the data in) is placed on the chest, and I'm thinking that if I just run the data-out line from LED no. 1 to the next LEDs around it, and so on, it would make a ripple/burst-like effect. Also, sorry for the cramped cabling: my SK6812 only has 4 pins instead of the bigger old 6-pin NeoPixel in Tinkercad, so I'm making it as close as possible to my situation. Will it work without any future problems? The thing is going to be on for 3 days straight. Also, the ATtiny is just an example; for the real build I'll use a 5-12 V powered LED strip controller like this BTF SP105E Bluetooth Controller, which probably already has everything in check for powering the LEDs (it's phone-controlled too!).
Do I need a bypass capacitor for each LED, or any extra resistors? My friend said the controller is packed with so many patterns and could run them so fast that he's afraid the LEDs' lifespan would be shortened, but since both of us are kinda new, I would like to hear from some experienced people here.
Here's my Tinkercad sketch: link
Any help would be great! Thank you.
Looks good!
You are supposed to have a capacitor at each LED to help smooth out the voltage when the LED is turning on and off rapidly, but you might be able to get away without them if you have thick, short wires and/or you do not turn the LEDs on too brightly.
Also, the LED boards you show in the picture already have capacitors on them. If you are not using these boards, it is pretty easy to solder a capacitor directly onto each LED.
You do not need any resistors.
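As for the burst effect: with an addressable strip, the effect is just the order in which you write the pixel data, so if LED 0 is on the chest and higher indices radiate outward, a brightness pulse that trails the index travels outward on its own. A minimal sketch of that idea, assuming a hypothetical sk6812_show() driver (an off-the-shelf controller like the SP105E does the equivalent internally):

#include <stdint.h>

#define N_LEDS 20

static uint8_t grb[N_LEDS][3]; /* SK6812 expects G, R, B byte order */

extern void sk6812_show(const uint8_t *data, int n_leds); /* hypothetical driver */

void burst_frame(int tick)
{
    for (int i = 0; i < N_LEDS; i++) {
        /* The pulse peaks where the index trails the tick, so the light
         * appears to travel from the chest (index 0) outward. */
        int d = tick - i * 2;
        uint8_t level = (d >= 0 && d < 8) ? (uint8_t)(255 - d * 32) : 0;
        grb[i][0] = level / 4; /* some green */
        grb[i][1] = level;     /* mostly red */
        grb[i][2] = level / 2; /* some blue  */
    }
    sk6812_show(&grb[0][0], N_LEDS);
}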
In a professional context, I have to use the VL53L0X. This sensor was released recently, along with its API, meaning that there's no help on the internet yet:
http://www.st.com/content/st_com/en/products/embedded-software/proximity-sensors-software/stsw-img005.html
This API contains some source and header files, which I compiled with gcc. It works fine, despite clearly lacking comments. I flash the memory of an STM32 (NUCLEO-F401RE), which controls a VL53L0X sensor via an I2C bus. I now want to add more VL53L0X sensors on the same I2C bus, and I am referring to this document (if you want to read it, go directly to the bottom half of page 5; the wiring is already done):
http://www.st.com/content/ccc/resource/technical/document/application_note/group0/0e/0a/96/1b/82/19/4f/c2/DM00280486/files/DM00280486.pdf/jcr:content/translations/en.DM00280486.pdf
The principle, which I have already applied to other sensors, is that they all start up with the same address. You then have to activate one, change its address, then activate the next one, change its address, and so on.
Unfortunately, STMicroelectronics didn't publish the list of I2C registers, so I have to use their API to control multiple sensors. The document linked above explains how to do so. Among other things, it specifies:
In vl53L0x_platform.h API file:
• Set VL53L0x_SINGLE_DEVICE_DRIVER macro to 0 so that API implementation will be automatically adapted to a multi-device context.
I looked everywhere in the API folder, and I was not able to find any reference to a VL53L0x_SINGLE_DEVICE_DRIVER macro. Setting it to 0 won't change anything, as this string is not present anywhere in the API files. Did anyone run into a similar problem?
I'm working on the same thing. It seems that you're further ahead than I am. However, putting this in my while(1) loop seems to make both sensors work:
ResetAndDetectSensor(0);   /* from ST's example code: toggles XSHUT and re-detects the sensors */
TimeStamp_Reset();         /* restarts the shared timestamp used to schedule the measurements */
The guide says that in order to use all the sensors simultaneously, you need to pull the XSHUT pin high for all the sensors, reset the timestamp and then pick up the sensor which actually detects something.
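For reference, here is a sketch of the re-addressing sequence from the application note, following the order used in ST's 53L0A1 example code. VL53L0X_SetDeviceAddress() and VL53L0X_DataInit() are real API calls; xshut_set() stands in for your own GPIO code (the API leaves XSHUT control to the platform layer), the I2cDevAddr field is the one from ST's sample platform code, and the addresses are arbitrary:

#include "vl53l0x_api.h"

#define N_SENSORS  3
#define FIRST_ADDR 0x54 /* 8-bit addresses; 0x52 stays free for the next boot */

extern void xshut_set(int sensor, int level); /* your GPIO code (hypothetical) */

VL53L0X_Dev_t sensors[N_SENSORS];

void assign_addresses(void)
{
    int i;

    for (i = 0; i < N_SENSORS; i++)
        xshut_set(i, 0);              /* hold every sensor in reset */

    for (i = 0; i < N_SENSORS; i++) {
        xshut_set(i, 1);              /* wake only sensor i; allow ~2 ms to boot */
        sensors[i].I2cDevAddr = 0x52; /* it always boots at the default address */
        VL53L0X_SetDeviceAddress(&sensors[i], FIRST_ADDR + 2 * i);
        sensors[i].I2cDevAddr = FIRST_ADDR + 2 * i; /* talk on the new address */
        VL53L0X_DataInit(&sensors[i]);               /* then the usual init chain */
    }
}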
I've got a little side project going using SDL2/SDL_mixer and a couple of other sound libraries. I've been trying for a while now to synchronize my audio and video but haven't gotten anywhere near success. I'm all new to this stuff, so forgive the poor man's logic and coding. At first I thought to call SDL_Delay(30) after every frame, and then tried a few other numbers in that range. Not quite right. Then I tried doing it with ticks: I would get the difference between current_ticks and last_ticks, and if the delta between ticks was <= 30, delay for 30 - delta. Still not quite right (by far). I'm hoping someone here with more experience might guide me in the right direction. As for the video, it's a visualizer, of course; it seems like a popular beginner's project.
The basic way you synchronize audio and video is to choose one as the timer source and present the other according to that timer. The easiest choice is generally audio, but because audio is generally buffered ahead, you need some way of measuring what point in the audio stream is actually coming out of the speakers. Once you have that, it's just a matter of waiting until the audio reaches the right time for the next video frame and then displaying it.
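A minimal sketch of that idea using SDL2's raw queue API. This assumes the audio is fed with SDL_QueueAudio() rather than through SDL_mixer (whose internal buffering you cannot query the same way); render_frame() and total_queued_bytes are hypothetical names:

#include <SDL2/SDL.h>

static Uint64 total_queued_bytes;   /* add to this every time you call SDL_QueueAudio() */

extern void render_frame(double t); /* hypothetical: draws one visualizer frame */

/* Approximate seconds of audio that have already left the queue. */
static double audio_clock_seconds(SDL_AudioDeviceID dev, const SDL_AudioSpec *spec)
{
    Uint32 still_queued = SDL_GetQueuedAudioSize(dev); /* bytes not yet consumed */
    Uint64 played = total_queued_bytes - still_queued;
    int bytes_per_frame = (SDL_AUDIO_BITSIZE(spec->format) / 8) * spec->channels;
    return (double)played / (double)(bytes_per_frame * spec->freq);
}

void run_video(SDL_AudioDeviceID dev, const SDL_AudioSpec *spec)
{
    const double frame_period = 1.0 / 30.0; /* target 30 fps video */
    double next_frame_time = 0.0;

    for (;;) {
        /* Wait until the audio clock catches up with the next frame's time. */
        while (audio_clock_seconds(dev, spec) < next_frame_time)
            SDL_Delay(1);
        render_frame(next_frame_time);
        next_frame_time += frame_period;
    }
}

The key design point is that SDL_Delay() is no longer trying to be the clock; it only sleeps briefly while the audio position, which cannot drift from what you hear, decides when each frame appears.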
So far I have been able to create an application where the Kinect sensor sits in one place. I have used speech recognition, EmguCV (OpenCV), and AForge.NET to help me process images and learn and recognize objects. It all works fine, but there is always scope for improvement, and I am posing some problems: [Ignore the first three; I want the answer for the fourth.]
The frame rate is horrible. It's like 5 fps even though it should be around 30 fps (this is WITHOUT all the processing). My application runs fine: it gets both color and depth frames from the camera and displays them, yet the frame rate is still bad. The samples run great, around 25 fps, but even when I ran the exact same code from the samples, it wouldn't budge. :-( [There is no need for code; please tell me the possible problems.]
I would like to create a little robot on which the Kinect and my laptop will be mounted. I tried using the Mindstorms kit, but its low-torque motors don't do the trick. Please tell me how I can achieve this.
How do I supply power on board? I know that the Kinect uses 12 volts for the motor, but it gets that from an AC adapter. [I would not like to cut my cable and replace it with a 12-volt battery.]
The biggest question: how in the world will it navigate? I have done A* and flood-fill algorithms. I read this paper like a thousand times and got nothing. I have the navigation algorithm in my mind, but how on earth will it localize itself? [It should not use GPS or any other kind of sensor, just its eyes, i.e. the Kinect.]
Helping me out would be awesome. I am a newbie, so please don't expect me to know everything. I have been searching the internet for 2 weeks with no luck.
Thanks a lot!
Localisation is a tricky task, as it depends on having prior knowledge of the environment in which your robot will be placed (i.e. a map of your house). While algorithms exist for simultaneous localisation and mapping, they tend to be domain-specific and as such not applicable to the general case of placing a robot in an arbitrary location and having it map its environment autonomously.
However, if your robot does have a rough (probabilistic) idea of what its environment looks like, Monte Carlo localisation is a good choice. On a high level, it goes something like the two steps below (a toy sketch in code follows them):
Firstly, the robot should make a large number of random guesses (called particles) as to where it could possibly be within its known environment.
With each update from the sensor (i.e. after the robot has moved a short distance), it adjusts the probability that each of its random guesses is correct using a statistical model of its current sensor data. This can work especially well if the robot takes 360º sensor measurements, but this is not completely necessary.
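To make those two steps concrete, here is a toy one-dimensional version. The map, motion noise, and sensor model are all made up for illustration, resampling is omitted, and a Kinect version would weight particles by comparing whole depth scans against the map rather than a single range reading:

#include <stdlib.h>
#include <math.h>

#define N_PARTICLES  1000
#define CORRIDOR_LEN 10.0  /* the robot lives on a 10 m line with a wall at the end */

static double particles[N_PARTICLES]; /* guessed positions (step 1) */
static double weights[N_PARTICLES];   /* probability of each guess */

static double frand(void) { return (double)rand() / RAND_MAX; }

/* Step 1: scatter guesses uniformly over the known environment. */
void init_particles(void)
{
    for (int i = 0; i < N_PARTICLES; i++) {
        particles[i] = frand() * CORRIDOR_LEN;
        weights[i] = 1.0 / N_PARTICLES;
    }
}

/* Step 2: after moving `dist` metres and measuring the range to the end
 * wall, move every guess and reweight it by how well it explains the
 * measurement (Gaussian sensor noise with std. dev. `sigma` assumed). */
void update(double dist, double measured_range, double sigma)
{
    double total = 0.0;
    for (int i = 0; i < N_PARTICLES; i++) {
        particles[i] += dist + (frand() - 0.5) * 0.1; /* noisy motion model */
        double expected = CORRIDOR_LEN - particles[i];
        double err = measured_range - expected;
        weights[i] *= exp(-err * err / (2.0 * sigma * sigma));
        total += weights[i];
    }
    for (int i = 0; i < N_PARTICLES; i++) /* renormalize so weights sum to 1 */
        weights[i] /= total;
}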
This lecture by Andrew Davison at Imperial College London gives a good overview of the mathematics involved. (The rest of the course will most likely be very interesting to you as well, given what you are trying to create). Good luck!