State Like Action In Behavior Tree - artificial-intelligence

From what I understand of Behavior Trees, each Behavior should be a short, goal-oriented Action that can be completed in a few iterations.
So for example, below is an image of a Behavior Tree:
Now let us assume that the Drive To Enemy behavior takes more than a few iterations to complete. So on each pass, Drive To Enemy is called because it is in the running state.
The problem is that I want to call Evade Enemy if an Enemy is nearby, but since Drive To Enemy is always called, I never get a chance to call Evade Enemy (which should probably be called Avoid Enemy).
Should I traverse the Tree EACH pass no matter what Action is currently running?
Am I going about this the right way?
What is the proper way of handling such a behavior?

I would say traversing all the way back to the top every time would be your last resort if the idea below doesn't work for you:
As Alex Champandard suggests on his website aigamedev.com, the basic idea instead is that, while you are in the "Drive To Enemy" behaviour, you include some way to run additional checks to make sure that behaviour should still be continued.
Alex's method is to use the parallel composite: a type of behaviour tree node that runs all its children simultaneously.
It would look like this:
MainSelector:
    Evade Enemy
        Locate Enemy
        Drive in opposite direction
    Parallel
        Is Enemy Nearby?
        Chase Enemy
            Find path to enemy
            Drive to enemy
            Fire Weapon
    Chase Flag
        Locate flag
        Find Path
        Drive to flag
The Parallel node will keep re-evaluating the "Is Enemy Nearby?" node (at a reasonable pace), even when execution is deep within the "Chase Enemy" sub-tree. The moment "Is Enemy Nearby?" returns failure, the parallel will immediately return failure and skip completing the "Chase Enemy" behaviour. Thus, the next evaluation of your tree will reach the "Evade Enemy" behaviour.
The "Is Enemy Nearby?" condition then acts as a sort of assertion or early-out check. Essentially it's an event-driven feature where your tree can respond to events even while it hasn't completed its current iteration yet.
The way I designed my system, though, I don't use parallel behaviours (I can't multi-thread properly with the 3rd-party game engine I use). Instead I have a composite that does pretty much the same thing, only it evaluates the checks in between each of its children's traversal: a sort of interleaved execution, jumping back and forth between normal traversal and evaluating the checks. Only if the checks fail do we jump back to the top.
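A minimal sketch of that last idea (not Alex's actual code; all type and function names here are assumptions): a two-child composite that ticks the condition before its main child on every update and aborts the branch as soon as the check fails.

typedef enum { BH_FAILURE, BH_SUCCESS, BH_RUNNING } bh_status;

typedef bh_status (*bh_tick)(void *node, void *agent);

typedef struct {
    bh_tick  check_tick;   /* e.g. the "Is Enemy Nearby?" condition */
    void    *check_node;
    bh_tick  main_tick;    /* e.g. the "Chase Enemy" sub-tree */
    void    *main_node;
} bh_guarded;

/* Tick the condition first on every update; if it fails, abort the whole
 * branch immediately so the selector above can fall through to another
 * behaviour (such as "Evade Enemy") on its next evaluation. */
bh_status guarded_tick(bh_guarded *g, void *agent)
{
    if (g->check_tick(g->check_node, agent) == BH_FAILURE)
        return BH_FAILURE;
    return g->main_tick(g->main_node, agent);
}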

Related

Asynchronously exit loop via interrupt or similar (MSP430/C)

I have run into a problem that I am rather stumped on, because every solution I can think of has an issue that makes it not work fully. I am working on a game on the MSP430FF529 that, when first powered up, has two images drawn to the screen infinitely using a loop and cycle delays. I would like it so that when the user presses the start button (a simple high-edge trigger on a port), the program immediately stops drawing those screens, no matter what part of the process it's in, and starts executing the rest of the code that runs the game.
I could put the function that puts the images on screen in a do-while loop, but then it wouldn't be asynchronous, as the current image being drawn would have to finish before it moved on.
I'd use the break statement, but I don't think that works from an ISR; it only works when it's directly inside the loop.
I could put the entire rest of the program in the ISR I use for the start button press, so that the screen drawing is essentially never returned to, but that's really messy, poor coding, and would cause a lot of problems later.
Essentially, I want to make it so that when the button is pressed the program will immediately jump to the part of the program that is the actual game and forget about drawing those images on the screen. Is it possible to somehow have an ISR that doesn't return to what was currently happening after the code in the routine is executed? Basically, once the program starts moving forward (the start button is pressed) I don't want to come back to the function that draws the images unless I explicitly call it again.
The only thing I can think of is the goto statement, which I feel in this particular instance would not actually be too bad, though I want to avoid using it for fear of it becoming a habit, since it is a poor solution in most cases. However, that might not even work, because I have a feeling that using goto in an ISR would really mess up the stack.
Any ideas? Any suggestions are appreciated.
What you want is basically a "context switch". You would need to modify the saved program counter and stack pointer that will be restored when you return from the ISR, and then do the normal ISR return so the interrupt mask is cleared, the stack is restored, etc. As noted in the comments to your question, this likely requires some manual assembly code.
I'm not familiar with the MSP430, but on other architectures this state lives in a structure of saved registers on the kernel stack or interrupt-context stack (or maybe just "the stack" on some microcontrollers), or it might be in some special registers; it's saved automatically by the CPU when it jumps to your ISR. So you have to change these saved register values wherever they are stored.
If you relax your requirement from "immediately" to "so fast that the user doesn't notice", you can put an if (button_pressed) check into one of the loops in the image-drawing routine.
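A sketch of that polled-flag variant (the flag, size, and helper names below are assumptions, not your actual code):

#define IMAGE_HEIGHT 240                /* placeholder image size */

volatile int button_pressed = 0;        /* set to 1 by the button ISR */

extern void draw_image_row(int row);    /* hypothetical row-drawing helper */

void draw_images(void)
{
    for (;;) {                                       /* the "infinite intro" loop */
        for (int row = 0; row < IMAGE_HEIGHT; row++) {
            if (button_pressed)
                return;                              /* bail out between rows */
            draw_image_row(row);
        }
    }
}

The ISR then only needs to set button_pressed = 1 and return.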
If you really want to abort the image drawing immediately, you can do so by resetting the MCU (for example, by writing a wrong password to the WDT). In the application initialization code, check if one of the causes of the reset was your own software:
bool start_button = false;

// SYSRSTIV holds the queued reset causes; each read pops the next one,
// and it reads as SYSRSTIV_NONE once no causes are left.
for (;;) {
    int cause = SYSRSTIV;
    if (cause == SYSRSTIV_WDTKEY)      // WDT password violation: our own trigger
        start_button = true;
    if (cause == SYSRSTIV_NONE)
        break;
    // you might handle debugging of other reset causes here ...
}

if (!start_button)
    draw_images();
else
    actual_game();
(This assumes that your code never accidentally writes a wrong WDT password, but even if that happens, you're only skipping the intro images.)
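For completeness, the trigger side might look something like this (CCS/IAR-style interrupt syntax; the port, pin, and ISR name are assumptions about your wiring):

#include <msp430.h>

#pragma vector = PORT1_VECTOR
__interrupt void start_button_isr(void)
{
    P1IFG &= ~BIT1;     // clear the button's pending interrupt flag
    WDTCTL = 0xDEAD;    // wrong WDT password -> immediate PUC; SYSRSTIV reports WDTKEY
}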

Can you detect a debugger attached to your process using Div by Zero

Can you detect whether or not a debugger is attached to your native Windows process by using a high precision timer to time how long it takes to divide an integer by zero?
The rationale is that if no debugger is attached, you get a hard fault, which is handled by hardware and is very fast. If a debugger is attached, you instead get a soft fault, which is percolated up to the OS and eventually the debugger. This is relatively slow.
Since there is absolutely nothing you can do to prevent a determined person from reverse engineering your code, no clever approach you find will be significantly better than simply calling IsDebuggerPresent().
No. A sufficiently determined attacker would simply host your process in a VM and break in that way.
Besides, you don't need to attach a debugger to attack a program: grabbing a minidump lets an adversary inspect the memory state offline, and a tool like Process Explorer can inspect open handles to determine which files are vulnerable.
If you were going to use an exception to determine whether a naive debugger were attached, I'd personally use INT_MIN/-1 to trigger an integer overflow exception. Most don't know about that one.
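A minimal sketch of that INT_MIN / -1 trick (MSVC-specific __try/__except; the variable names are mine, and exactly which exception code Windows reports is worth verifying on your target):

#include <windows.h>
#include <limits.h>
#include <stdio.h>

int main(void)
{
    volatile int num = INT_MIN;
    volatile int den = -1;           /* volatile so the division happens at runtime */
    DWORD code = 0;

    __try {
        volatile int q = num / den;  /* quotient does not fit in an int */
        (void)q;
    }
    __except (code = GetExceptionCode(), EXCEPTION_EXECUTE_HANDLER) {
        printf("caught exception 0x%08lx\n", (unsigned long)code);
    }
    return 0;
}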
Most debuggers used by reverse engineers come with methods to remove 99% of the marks a debugger leaves behind, and most of them provide exception filtering, meaning the speed difference would be undetectable.
It's more productive to prevent the debugger from attaching in the first place, but in the long run you'll never come out ahead unless you make the required effort investment unfeasible.

Execute Large C Program By Generating Intermediate Stages

I have an algorithm that takes 7 days to run to completion (and a few more algorithms like it).
Problem: in order to run the program successfully, I need a continuous power supply, and if, out of luck, there is a power loss in the middle, I need to restart it from the beginning.
So I would like to ask for a way to make my program execute in phases (say each phase generates results A, B, C, ...), so that in case of a power loss I can somehow use these intermediate results and continue/resume the run from that point.
Problem 2: how do I prevent a file from being reopened every time a loop iterates? (fopen was placed in a loop that runs nearly a million times; this was needed as the file is being changed with each iteration.)
You can separate it into several source files and use make.
When each result phase is complete, branch off to a new universe. If the power fails in the new universe, destroy it and travel back in time to the point at which you branched. Repeat until all phases are finished, and then merge your results into the original universe via a transcendental wormhole.
Well, a couple of options, I guess:
You split your algorithm along sensible lines, with a defined output from each phase that can be the input to the next phase. Then configure your algorithm as a workflow (ideally soft-configured through some declaration file).
You add logic to your algorithm by which it knows what it has successfully completed (committed). Then, on failure, you can restart the algorithm, and it bins all uncommitted data and restarts from the last commit point.
Note that both these options may draw out your 7-day run time even further!
So, to improve the overall runtime, could you also separate your algorithm so that it has "worker" components that can work on "jobs" in parallel? This usually means drawing out some "dumb" but intensive logic (such as a computation) that can be parameterised. Then you have the option of running your algorithm on a grid/space/cloud/whatever, which at least gives you options to reduce the run time. It doesn't even need to be a space... just use queues (IBM MQ Series has a C interface) and have listeners on other boxes listening to your jobs queue, processing the jobs, and persisting the results. You can still phase the algorithm as discussed above, too.
Problem 2: Opening the file on each iteration of the loop because it's changed
I may not be best qualified to answer this, but doing an fopen (and fclose) on each iteration certainly seems wasteful and slow. To answer properly, or to have anyone more qualified answer, I think we'd need to know more about your data.
For instance:
Is it text or binary?
Are you processing records or a stream of text? That is, is it a file of records or a stream of data? (you aren't cracking genes are you? :-)
I ask because, judging by your comment "because it's changed each iteration", you might be better off using a random-access file. By this, I'm guessing you're re-opening the file to fseek to a point you may have already passed (in your stream of data) and making a change there. However, if you open the file in binary mode, you can move anywhere within it using fseek and fsetpos; that is, you can "seek" backwards.
Additionally, if your data is record-based or somehow organised, you could also create an index for it. With this, you could use fsetpos to jump straight to the record you're interested in and traverse from there, saving time in finding the area of data to change. You could even persist the index in an accompanying index file.
Note that you can write plain text to a binary file. Perhaps worth investigating?
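A small sketch of that record-based, open-once approach (the record layout and file name are assumptions):

#include <stdio.h>

struct record { double value; long flags; };   /* hypothetical record layout */

/* Seek to record i in a file opened once with fopen("data.bin", "r+b")
 * and overwrite it in place, instead of reopening the file each time. */
int update_record(FILE *f, long i, const struct record *r)
{
    if (fseek(f, i * (long)sizeof(struct record), SEEK_SET) != 0)
        return -1;
    return fwrite(r, sizeof(struct record), 1, f) == 1 ? 0 : -1;
}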
Sounds like a classic batch-processing problem to me.
You will need to define checkpoints in your application and store the intermediate data until a checkpoint is reached.
Checkpoints could be the row number in a database, or the position inside a file.
Your processing might take longer than now, but it will be more reliable.
In general, you should think about where the bottleneck in your algorithm is.
For problem 2, use two files; it might be that your application will be days faster if you call fopen a million times less...
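A minimal checkpoint/resume sketch along those lines (the file names, sizes, and the do_one_step helper are all assumptions): the loop index is committed to a small state file after every batch, the data file is opened once, and on startup the program resumes from the last committed index.

#include <stdio.h>

#define TOTAL_STEPS 1000000L
#define BATCH_SIZE  10000L

static long load_checkpoint(void)
{
    long step = 0;
    FILE *f = fopen("checkpoint.dat", "r");
    if (f) {
        if (fscanf(f, "%ld", &step) != 1)
            step = 0;
        fclose(f);
    }
    return step;
}

static void save_checkpoint(long step)
{
    FILE *f = fopen("checkpoint.tmp", "w");
    if (!f)
        return;
    fprintf(f, "%ld\n", step);
    fclose(f);
    rename("checkpoint.tmp", "checkpoint.dat");   /* commit by atomic rename */
}

int main(void)
{
    FILE *data = fopen("results.dat", "r+b");     /* opened once, not per iteration */
    if (!data)
        data = fopen("results.dat", "w+b");

    for (long step = load_checkpoint(); step < TOTAL_STEPS; step++) {
        /* do_one_step(data, step);   hypothetical unit of work writing to data */
        if (step % BATCH_SIZE == 0)
            save_checkpoint(step);
    }

    save_checkpoint(TOTAL_STEPS);
    fclose(data);
    return 0;
}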

How to limit framerate in a curses program?

I've been trying to make a game using ncurses.
However, I am stumped with how to make the timing part of my main loop work.
Would someone be able to offer some insight into how I could add framerate code to my main loop, while keeping portability and not compromising speed?
Thanks in advance!
The normal way to handle this type of problem, I believe, is to pass the duration since the last loop iteration (often called delta) as a parameter to the update step. This allows you to update the progress of entities in the game based on the amount of real-world time that has passed. For example:
new_position = old_position + delta*speed
This allows entities in your game to move at a constant speed independent of the frame rate of your program.
Assuming you have functionality to update your game state after a small period of time, next you need to be able to poll the user for input. If you do not specify otherwise, ncurses will block when you ask for input. To prevent this, look up the init functions here. In particular, the halfdelay() function may be of use to you (it implements a sort of framerate).
Just remember to check for ERR on the return value of getch() when using this mode.
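Putting those pieces together, a sketch of such a loop might look like the following (this uses timeout() rather than halfdelay() purely because it takes milliseconds; the 16 ms frame budget and the movement values are arbitrary assumptions):

#include <curses.h>
#include <time.h>

int main(void)
{
    initscr();
    cbreak();
    noecho();
    timeout(16);                      /* getch() waits at most ~16 ms (~60 fps) */

    struct timespec prev, now;
    clock_gettime(CLOCK_MONOTONIC, &prev);

    double x = 0.0, speed = 20.0;     /* cells per second, arbitrary */
    int running = 1;
    while (running) {
        int ch = getch();             /* returns ERR if nothing was pressed */
        if (ch == 'q')
            running = 0;

        clock_gettime(CLOCK_MONOTONIC, &now);
        double delta = (now.tv_sec - prev.tv_sec)
                     + (now.tv_nsec - prev.tv_nsec) / 1e9;
        prev = now;

        x += delta * speed;           /* new_position = old_position + delta*speed */

        erase();
        mvaddch(0, (int)x % COLS, '@');
        refresh();
    }

    endwin();
    return 0;
}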

Using threads, how should I deal with something which ideally should happen in sequential order?

I have an image generator which would benefit from running in threads. I am intending to use POSIX threads, and have written some mock up code based on https://computing.llnl.gov/tutorials/pthreads/#ConVarSignal to test things out.
In the intended program, when the GUI is in use, I want the generated lines to appear from the top to the bottom one by one (the image generation can be very slow).
It should also be noted, the data generated in the threads is not the actual image data. The thread data is read and transformed into RGB data and placed into the actual image buffer. And within the GUI, the way the thread generated data is translated to RGB data can be changed during image generation without stopping image generation.
However, there is no guarantee from the thread scheduler that the threads will run in the order I want, which unfortunately makes the transformation of the thread-generated data trickier, implying the undesirable solution of keeping an array of bool values to indicate which lines are done.
How should I deal with this?
Currently I have a watcher thread to report when the image is complete (this really should be driving a progress bar, but I've not got that far yet; it instead uses pthread_cond_wait), and several render threads doing while(next_line());
next_line() locks a mutex, gets the value of img_next_line, increments it, and unlocks the mutex. It then renders the line, locks a second mutex (different from the first) to get lines_done, checks it against the image height, signals if complete, unlocks, and returns 0 if complete or 1 if not.
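For reference, a minimal sketch of the scheme just described (the names, sizes, and exact bookkeeping are my assumptions, not the actual code):

#include <pthread.h>

#define HEIGHT 512

static pthread_mutex_t next_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t done_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done_cond  = PTHREAD_COND_INITIALIZER;

static int img_next_line = 0;     /* next row to hand out */
static int lines_done    = 0;     /* rows finished so far */
static int line_ready[HEIGHT];    /* per-row "done" flags for the GUI to poll */

static void render_line(int y) { (void)y; /* ... generate row y ... */ }

/* Hand out one row, render it, and record completion; returns 0 when done. */
int next_line(void)
{
    pthread_mutex_lock(&next_mutex);
    int y = img_next_line++;
    pthread_mutex_unlock(&next_mutex);

    if (y >= HEIGHT)
        return 0;

    render_line(y);

    pthread_mutex_lock(&done_mutex);
    line_ready[y] = 1;
    lines_done++;
    int finished = (lines_done == HEIGHT);
    if (finished)
        pthread_cond_signal(&done_cond);   /* wake the watcher thread */
    pthread_mutex_unlock(&done_mutex);

    return finished ? 0 : 1;
}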
Given that threads may well be executing in parallel on different cores, it's pretty much inevitable that the results will arrive out of order. I think your approach of tracking what's complete with a set of flags is quite reasonable.
It's possible that the overall effect might be nicer if you used threads at a different granularity. Say, give each thread 20 lines to work on rather than one. Then on completion you'd have bigger blocks available to draw, and maybe drawing stripes would look OK?
Just accept that the rows will be done in a non-deterministic order; it sounds like that is happening because they take different lengths of time to render, in which case forcing a completion order will waste CPU time.
This may sound silly, but as a user I don't want to see the lines rendered slowly one by one from top to bottom. It makes a slow process seem even slower, because the user has already predicted exactly what will happen next. Better to just render each part when it is ready, even if the result is scattered all over the place (either as single lines or, better yet, as blocks, as some have suggested). It makes it look more random and therefore more captivating and less boring to a user like me.
