How to time a case structure in LabVIEW?

I am using a cDAQ-9191 chassis along with an NI-9205 module for data acquisition. Attached to the post is a figure of my LabVIEW code for acquiring data and saving it to a measurement file. It is working fine: I start by running the code and I can see the waveforms of all 9 channels. What I need is to press a record button so that the data is written to a TDMS file for only 6 seconds, after which the code should stop automatically.
The block diagram of the code, using the Elapsed Time Express VI, is shown in the figure.
The attached TDMS screenshot shows how the data is saved. The TDMS File Viewer shows that each group in the file contains 200 samples. For 6 seconds of data I should have 6000 samples in total, since the sampling rate is set to 1000 S/s in the DAQ settings, which at 200 samples per group means 30 groups. But each time I run the code the number of groups changes. How can I fix this?

So you want to automatically save just 6 seconds of data? You could work with the Elapsed Time Express VI:
http://zone.ni.com/reference/en-XX/help/371361P-01/lvexpress/elapsed_time/
Set the Start Time to zero and the Time Target to 6. After 6 seconds the VI outputs TRUE, otherwise FALSE. Then you will need a while loop: while the VI returns FALSE, write to the file. The while loop should be placed inside your case structure.
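LabVIEW is graphical, so the following is only a rough Python sketch of that control flow, not something taken from your VI; read_chunk and write_group are stand-ins for the DAQmx read and the TDMS write, with timings matching your settings (1000 S/s, 200 samples per read):

import time

def read_chunk():
    time.sleep(0.2)          # stand-in: one read of 200 samples at 1 kS/s takes about 0.2 s
    return [0.0] * 200

def write_group(groups, chunk):
    groups.append(chunk)     # stand-in for writing one group to the TDMS file

def record_for_six_seconds():
    groups = []
    start = time.monotonic()                  # Elapsed Time: start counting from zero
    while time.monotonic() - start < 6.0:     # Elapsed Time output is still FALSE
        write_group(groups, read_chunk())     # keep writing while inside the 6 s window
    return groups                             # loop stops once the VI returns TRUE (about 30 groups)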
You should also change the mechanical action of your Boolean control named switch to Latch When Pressed. See this link: http://zone.ni.com/reference/en-XX/help/371361G-01/lvhowto/changemechactofboolswitch/
Hope it helps

Related

Flink - How to combine process time and count trigger?

I have a Flink streaming job where I am creating windows based on some keys and collecting the data points in each window.
.window(SlidingProcessingTimeWindows.of(Time.days(1), Time.minutes(5)))
.trigger(CountTrigger.of(5))
.process(<ProcessWindowFunction>)
I'm using the above piece of code to create a sliding window of size 1 day with a slide of 5 minutes. Also, the count trigger fires the process function once 5 data points have accumulated.
In addition to this, I want to trigger the process function for every slide that happens. That is, until 1 day of data points has accumulated (the window size), the CountTrigger should trigger the process function; once a full day's worth of points exists and the window slides every 5 minutes, I want to trigger the process function for every data point instead of waiting for the CountTrigger to accumulate 10 data points. Can someone please help me with how to do this?
Be aware that this is going to be pretty painful. Every event is going to be assigned to a total of 288 windows (24 hours / 5 minutes). This means that every event is going to trigger 288 calls to your ProcessWindowFunction.
If you find you need to optimize this, you can probably get better performance with a carefully implemented KeyedProcessFunction.
Extend org.apache.flink.streaming.api.windowing.triggers.CountTrigger and override the onProcessingTime method. Implement your processing-time logic there, then use this trigger instead of the plain CountTrigger.

Control WVGA display with stm32f429-discovery LTDC

I am trying to output some data on a 7-inch TFT-LCD display (MCT070PC12W800480LML) using the LCD-TFT display controller (LTDC, 18-bit) on an STM32F4.
The LTDC interface settings are configured in CubeMX. In the program, an LCD data buffer is created with some values and its starting address is mapped to the LTDC frame buffer start address.
At the moment the display doesn't react to the data sent by the LTDC. It only shows white and black stripes after I connect ground and power for the digital circuitry to a 3 V source; VLED+ is connected to a 9 V source. The VSYNC, HSYNC and CLOCK signals are generated by the LTDC and match the specified values. I measured them on the LCD strip, so the connection should be right. I also tried pulsing the LCD reset pin, but that made no difference.
The timing settings might be wrong.
LTDC clock is 33 MHz.
Here is the link to the display datasheet: http://www.farnell.com/datasheets/2151568.pdf?_ga=2.128714188.1569403307.1506674811-10787525.1500902348. I saw some other WVGA displays using the same timing for the synchronization signals, so I assume these timings are standard for this kind of display.
Maybe the signal polarity is wrong, or I am missing something else. The program I am using now worked on the STM32F429-Discovery's built-in LCD; I just changed the timings. Any suggestions?
Thank you.
It could be something else, but I can see a problem with your timing values.
The back porch for both horizontal and vertical includes the sync pulse, but there still must be a non-zero sync pulse width. My observation is that you have tried to hit the datasheet totals of h = 1056 and v = 525 clocks by setting the sync pulses to 0. That won't work.
I would make the HSYNC pulse 20 and the VSYNC pulse 10. The total clocks will be the same, but it is not critical that they match the spec sheet.

0xC00D4A44 MF_E_SINK_NO_SAMPLES_PROCESSED with MPEG 4 sink

I am running out of ideas on why I am getting this HRESULT.
I have a pipeline in Media Foundation. A file is loaded through the source resolver. I am using the media session.
Here is my general pipeline:
Source Reader -> Decoder -> Color Converter (to RGB24) -> Custom MFT -> Color Converter (To YUY2) -> H264 Encoder -> Mpeg 4 Sink
In my custom MFT I do some editing to the frames. One of the tasks of the MFT is to filter samples and drop the undesired ones.
This pipeline is used to trim video and output an MP4 file.
For example, if the user wants to trim 3 seconds starting at the 10-second marker, my MFT reads the uncompressed sample time and discards out-of-range samples by asking for more input. If a sample is in range, it is passed on to the next color converter. My MFT handles frames in RGB24, hence the initial color converter; the second color converter changes the color space for the H264 encoder. I am using the High Profile Level 4.1 encoder.
The pipeline gets set up properly. All of the frames get passed to the sink, and since I have a wrapper around the MPEG-4 sink I can see that BeginFinalize and EndFinalize get called.
However, on some of my trim operations EndFinalize spits out MF_E_SINK_NO_SAMPLES_PROCESSED. It seems random; it usually happens when a range not close to the beginning is selected.
It might be due to the sample times. I am rebasing the sample times and durations.
For example, if the adjusted frame duration is 50 ms (selected by the user), I grab the first acceptable sample (say at 1500 ms) and rebase it to 0. The next one arrives in my MFT at 1550 ms and is set to 50 ms, and so on, so frame times are set in 50 ms increments.
Is this approach correct? Could it be that the sink is not receiving enough samples to write the headers and finalize the file?
As mentioned, it works in some cases and fails in most. I am running my code on Windows 10.
I tried to implement the same task using IMFMediaSession/IMFTopology, but had the same problems you are facing. I think IMFMediaSession either modifies the timestamps outside your MFT or expects them not to be modified by your MFT.
So, in order to make this work, I took the IMFSourceReader -> IMFSinkWriter approach.
This way I could modify the timestamps of the samples read from the reader and pass to the writer only those that fall into the given range.
Furthermore, you can take a look at the old MFCopy example; it does exactly the file trimming you described. You can download it from here: https://sourceforge.net/projects/mfnode/

NetLogo - how to show the final sum of ticks passed

I'm currently working on a simulation where I've programmed turtles to move faster once a specific number of ticks has passed. For that reason I have to reset the ticks so the command can run over and over again. What I want is to see the total number of ticks that have elapsed over the entire simulation, but with the reset-ticks command I can only see how many ticks have passed since the last time I used the move command on my turtles. This makes it impossible to use a monitor in my interface to show the ticks. So how do I see the final count of ticks since I started the simulation, and not only since the last time the ticks were reset?
To do what you describe, you could create a global variable (say, tickTotal), increment it each tick, and add a monitor to the interface.
But what you should do instead is stop resetting ticks, and use mod to control the cyclical response of turtle movement to the tick count.

Timer to represent AI reaction times

I'm creating a card game in Pygame for my college project, and a large aspect of the game is how the game's AI reacts to the current situation. I have a function that randomly generates a number between two parameters, and this is how long I want the program to wait.
All of the code for my AI is contained within an if statement, and once it is called I want the program to wait the generated amount of time and then make its decision on what to do.
Originally I had:
pygame.time.delay(calcAISpeed(AIspeed))
This would work well if it didn't pause the rest of the program while the AI is waiting, stopping the user from interacting with it. For the same reason I cannot use while loops to create my timer either.
What is the best way to work around this without going into multithreading or other complex solutions? My project is due soon and I don't want to make massive changes. I've tried using pygame.time.Clock functions to compare the current time to the generated one, but resetting the clock once the operation has been performed has proved troublesome.
Thanks for the help, and I look forward to your input.
The easiest way around this would be to have a variable within your AI called something like wait and set it to a random number (it will have to be tweaked to your program's speed, as explained in the code below). Then, in your update function, have a conditional that checks whether that wait number is zero or below, and if not, subtracts a certain amount from it. Below is a basic set of code to illustrate this:
class AI(object):
    def __init__(self):
        # Put the stuff you want in your AI in here.
        self.currentwait = 100
        # ^^^ All you need is this variable defined somewhere.
        # If you want a static number as your wait time, add this variable:
        self.wait = 100  # Your number here

    def updateAI(self):
        # If the wait counter has reached zero or below, do stuff.
        if self.currentwait <= 0:
            pass  # Do your AI stuff here
        else:
            # Based on your game's tick speed and how long you want
            # your AI to wait, you can change the amount removed from
            # your "current wait" variable.
            self.currentwait -= 1  # Your number here
To give you an idea of what is going on above: you have a variable called currentwait, which describes how much time the program still has to wait. If this number is greater than 0 there is still time to wait, so nothing AI-related gets executed. However, time is subtracted from this variable every tick, so there is less and less time to wait. You can control this rate using the clock tick rate. For example, if your clock rate is set to 60, you can make the program wait 1 second by setting currentwait to 60 and taking 1 off every tick until the number reaches zero.
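For instance, plugging that into a main loop could look roughly like this (a minimal sketch that reuses the AI class above; the window size, the 60 FPS tick rate and the 0.5-2 second delay range are just placeholder assumptions):

import random
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

ai = AI()                                  # the AI class from the snippet above
ai.currentwait = random.randint(30, 120)   # roughly 0.5-2 seconds at 60 FPS with 1 removed per frame

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    ai.updateAI()         # counts down once per frame and does its AI stuff when it reaches 0
    pygame.display.flip()
    clock.tick(60)        # 60 frames per second, so 60 frames == 1 second of waiting

pygame.quit()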
Like I said, this is very basic, so you will probably have to change it to fit your program slightly, but it should do the trick. Hope this helps, and good luck with your project :)
The other option is to create a timer event on the event queue and listen for it in the event loop: How can I detect if the user has double-clicked in pygame?
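A minimal sketch of that second approach (the AI_DECIDE event id, the delay range and the print call are placeholders, not part of your project): start a one-shot timer when the AI needs to react, and act when its event shows up in the queue.

import random
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

AI_DECIDE = pygame.USEREVENT + 1    # hypothetical custom event id for the AI's decision

# Start a one-shot timer, e.g. when it becomes the AI's turn; the delay is in milliseconds.
pygame.time.set_timer(AI_DECIDE, random.randint(500, 2000))

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == AI_DECIDE:
            pygame.time.set_timer(AI_DECIDE, 0)   # disable the timer so it only fires once
            print("AI makes its decision here")   # replace with your AI code
    pygame.display.flip()
    clock.tick(60)

pygame.quit()

The loop keeps running normally while the timer counts down, so the user can still interact with the game in the meantime.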
