Homeworld 2 map scripting with Lua 4: unknown errors - lua-4.0

I am scripting maps for the game Homeworld 2 -- the original, NOT Remastered (since I don't have, and can't get, Remastered).
The problem is, I'm getting an error somewhere in the script, which I'm pretty sure is a syntax error because it crashes the game before the main menu (in my experience, other error types usually cause a crash when the map loads).
I have tried codepad.org and checked out other similar sites, but they don't let you choose Lua version 4 (which is what Homeworld 2 uses), so I can only assume they use Lua 5, which means their usefulness is rather limited. Codepad said my code was good despite this.
I have skimmed the Lua 4 documentation, but honestly I have no idea what I'm looking for.
Thus far, my programming experience is C++, plus the little bit of Lua I gleaned from example Homeworld 2 maps and my quick perusal of the documentation.
As near as I can tell, the code should be good. I do recall hearing ages ago that not all of Lua was valid in Homeworld 2 and that Homeworld 2 placed additional limits on what Lua could do, though I haven't been able to find that information again.
All my research into this issue from the Homeworld end of things keeps turning up Remastered material rather than original Homeworld 2 material, and Remastered has changed some things. Even then, I still haven't found any topics on the limits of Lua scripting.
The script I made basically generates a random map in a randomly chosen style (e.g. the resources might be evenly distributed throughout the map, concentrated in a big field, grouped in clusters, or even laid out in a big ring).
My first version of the map worked, but was messy and disorganized, so I rewrote the whole thing to be neater and easier to tweak (i.e. I moved many variables to the top so they can be found easily).
All the core code should, in theory, be the same, only with certain things moved around and better commented.
I did put some of the code into functions and called those functions, but I can't find anything that says I did it wrong.
So what I need is either something that can check Lua 4 code for errors (the ones that can be found without running it, anyway), or something that clearly shows how Lua 4 did things differently from Lua 5. Of course, if anyone knows anything about Homeworld 2-specific limitations, that would be wonderful.

If you have experience with C++, it shouldn't be a problem for you to compile a standalone version of Lua 4.0.1 (it's very easy, and the codebase is highly standards compliant, compared to typical C/C++ projects). This would certainly let you at least check for syntax errors (i.e. "errors which can be found without running it").
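For instance, a bare-bones checker built against the Lua 4.0 C API could look something like the sketch below. The include/library paths are whatever your 4.0.1 build produced, and the script will of course fail at runtime here because the Homeworld 2 functions don't exist -- but a parse failure comes back as LUA_ERRSYNTAX before anything executes, which is the case you care about. Lua prints the actual error message itself.

    /* checkmap.c -- sketch of a Lua 4.0 syntax checker (not Homeworld-specific).
     * Build roughly like:
     *   gcc checkmap.c -I<lua-4.0.1>/include -L<lua-4.0.1>/lib -llua -lm
     * lua_dofile() parses and then runs the file; a parse failure is reported
     * as LUA_ERRSYNTAX before any code executes. */
    #include <stdio.h>
    #include "lua.h"

    int main(int argc, char **argv)
    {
        lua_State *L;
        int rc;

        if (argc < 2) {
            fprintf(stderr, "usage: %s mapscript.lua\n", argv[0]);
            return 1;
        }

        L = lua_open(0);                /* 0 = default stack size in Lua 4.x */
        rc = lua_dofile(L, argv[1]);    /* Lua prints the error text itself  */

        if (rc == 0)
            printf("parsed and ran cleanly\n");
        else if (rc == LUA_ERRSYNTAX)
            printf("syntax error -- this is the kind that crashes before the menu\n");
        else if (rc == LUA_ERRFILE)
            printf("could not open %s\n", argv[1]);
        else
            printf("syntax OK, but it failed at runtime (code %d) -- expected here,\n"
                   "since the Homeworld 2 functions don't exist outside the game\n", rc);

        lua_close(L);
        return rc;
    }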

Related

Implementing simple code in Microsemi's SoftConsole

I would like to get into the field of FPGAs a bit. I currently have a PolarFire Everest Dev Board and would like to try something small on it for testing purposes. My current level is very low, i.e. complete beginner. My first working project was a counter that counts in binary up to 15 and outputs the value on the board's LEDs. Now I want to play with RISC-V. Unfortunately I can't find anything on the internet that meets my expectations, and almost nothing is "beginner-friendly". My current goal is really just to implement something on the level of a Hello World program in C via SoftConsole. Unfortunately I have no idea how to go about it. Can anyone help me or recommend a good introduction on the internet? Most of the material is either unusable, requires licenses I can't get, or is simply no longer available (which happens to me quite often with PDFs from Microsemi).
Since I don't really know what I could do with it to start with, I don't have any code yet that I would like to include. The plan is really to create something where I also get feedback from the board that something has happened. Later, when I understand more, I'd like to use it to manage SRAM.

Best way to identify system library commands in Lexer/Bison

I'm writing an interpreter for a new programming language. The language's syntax is very simple, and the "system library" commands are treated as plain identifiers (there is no special construct; they are functions like everything else, only pre-defined internally). And no, this is not yet another one of the million Lisps out there.
The question is:
Should I have the Lexer catch them, or should I do it in the AST-construction code?
What I've done so far:
I tried recognizing all of them in my lexer script, and there are a lot of them already -- over 200. I send the same token back to Bison (SYSTEM_CMD), just with a different value (basically a numeric index into the array where all the system commands are stored).
As an approach, I think this makes it much faster than having to look up every single identifier in a hash table to see whether it's a system command.
The thing is, the lexer is getting quite huge (in terms of the resulting binary size, I mean) rather fast, and I obviously don't like that.
Given that my focus is something both lightning-fast (I'm already quite good with that) and small enough to be embedded, what would be the most recommended approach?
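For reference, here is a rough sketch of the lookup alternative being weighed against the 200-rule lexer: one generic identifier rule, plus a sorted table and bsearch(). The table contents and the names (sys_commands, system_cmd_index, SYSTEM_CMD) are illustrative, not from the actual project.

    /* Sketch of the lookup alternative: the lexer matches one generic identifier
     * rule, then asks this function whether it is one of the ~200 system commands.
     * Table contents and names are illustrative. */
    #include <stdlib.h>
    #include <string.h>

    static const char *sys_commands[] = {   /* must be kept alphabetically sorted */
        "append", "length", "print", "reverse", "substr",
        /* ...the other ~200 entries... */
    };
    #define NSYS (sizeof sys_commands / sizeof sys_commands[0])

    static int cmp_name(const void *key, const void *elem)
    {
        return strcmp((const char *)key, *(const char *const *)elem);
    }

    /* Returns the index to hand back with the SYSTEM_CMD token, or -1 if the
     * identifier is not a system command and stays an ordinary IDENTIFIER. */
    int system_cmd_index(const char *name)
    {
        const char **hit = bsearch(name, sys_commands, NSYS,
                                   sizeof sys_commands[0], cmp_name);
        return hit ? (int)(hit - sys_commands) : -1;
    }

Whether this is actually slower than 200 separate lexer rules is worth measuring; a DFA that large has its own costs, which is presumably why the generated binary keeps growing.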

C99 dynamic array

I'm writing a very small, project-specific OpenGL ES engine for iPhone, and I really need a good, solid, proven dynamic array library/macro in the C99 dialect (no C++, Obj-C, or STL whatsoever).
I need it for render batches and polygon meshes, so it should handle various types of data and add minimal overhead when the array is resized or new data is inserted.
I've been searching around and found two candidates for my needs.
The first one is ccCArray from Cocos2d.
The other is utarray, written by Troy D. Hanson.
ccCArray IS rock solid, thoroughly proven by the community. utarray looks fine, but I can't find anyone who actually uses it.
Any more suggestions?
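For reference, the sort of interface I mean can be sketched in plain C99 macros like this; it is only an illustration (realloc failure handling omitted), not the actual utarray or ccCArray API:

    /* Illustrative C99 growable-array macros -- not the real utarray/ccCArray API.
     * realloc() failure handling is omitted to keep the sketch short. */
    #include <stdlib.h>

    #define DYNARRAY(type)  struct { type *data; size_t len, cap; }

    #define da_push(a, value)                                                   \
        do {                                                                    \
            if ((a).len == (a).cap) {                                           \
                (a).cap = (a).cap ? (a).cap * 2 : 16;                           \
                (a).data = realloc((a).data, (a).cap * sizeof *(a).data);       \
            }                                                                   \
            (a).data[(a).len++] = (value);                                      \
        } while (0)

    #define da_free(a)  (free((a).data), (a).data = NULL, (a).len = (a).cap = 0)

    /* usage:
     *   DYNARRAY(float) verts = { 0 };
     *   da_push(verts, 1.0f);
     *   da_push(verts, 2.0f);
     *   da_free(verts);
     */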
A library?! A C++ template would be more than suitable for this need. I'd say AT MOST 15 functions (excluding alternative constructors and const getters), and you're done. You'd also be able to use it for ANY type, ANY size, and ANY size type (byte, int, etc.), and it's just one file: a .h or, better said, a .hpp.
Any reason you're rejecting C++? Seems like you want to make life harder for yourself :)

Parsing: library functions, FSM, explode() or lex/yacc?

When I have to parse text (e.g. config files or other rather simple/descriptive languages), there are several solutions that come to my mind:
using library functions, e.g. strtok(), sscanf()
a finite state machine which processes one char at a time, tokenizing and parsing
using the explode() function I once wrote out of pure boredom
using lex/yacc (read: flex/bison) to generate an appropriate parser
I don't like the "library functions" approach. It feels clumsy and awkward. explode(), while it doesn't take much new code, feels even more blown up. And flex/bison often seems like sheer overkill.
I usually implement an FSM, but at the same time I already feel sorry for the poor guy who may have to maintain my code at a later point.
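To make that concrete, my FSMs usually boil down to something like this stripped-down key = value scanner (illustrative only, not real code; whitespace trimming and error handling omitted):

    /* Stripped-down illustration of a hand-rolled FSM: one switch on the current
     * state, one character consumed per step. Real versions grow extra states for
     * quoting, escapes, continuations, ... which is where the maintenance pain starts. */
    #include <ctype.h>
    #include <stdio.h>

    enum state { S_KEY, S_VALUE, S_SKIP };

    static void parse_line(const char *line)
    {
        enum state st = S_KEY;
        char key[64], val[256];
        size_t k = 0, v = 0;

        for (const char *p = line; *p && *p != '\n'; p++) {
            switch (st) {
            case S_KEY:
                if (*p == '=')            st = S_VALUE;
                else if (*p == '#')       st = S_SKIP;      /* comment line */
                else if (!isspace((unsigned char)*p) && k < sizeof key - 1)
                    key[k++] = *p;
                break;
            case S_VALUE:                 /* value keeps its leading space here */
                if (v < sizeof val - 1)   val[v++] = *p;
                break;
            case S_SKIP:
                break;
            }
        }
        key[k] = '\0';
        val[v] = '\0';
        if (st == S_VALUE)
            printf("key '%s' -> value '%s'\n", key, val);
    }

    int main(void)
    {
        parse_line("colour = dark red\n");
        parse_line("# just a comment\n");
        return 0;
    }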
Hence my question:
What is the best way to parse relatively simple text files?
Does it matter at all?
Is there a commonly agreed-upon approach?
I'm going to break the rules a bit and answer your questions out of order.
Is there a commonly agreed-upon approach?
Absolutely not. IMHO the solution you choose should depend on (to name a few) your text, your timeframe, your experience, even your personality. If the text is simple enough to make flex and bison overkill, maybe C is itself overkill. Is it more important to be fast, or robust? Does it need to be maintained, or can it start quick and dirty? Are you a passionate C user, or can you be enticed away with the right language features? &c., &c.
Does it matter at all?
Again, this is something only you can answer. If you're working closely with a team of people, with particular skills and abilities, and the parser is important and needs to be maintained, it sure does matter! If you're writing something "out of pure boredom," I would suggest that it doesn't matter at all, no. :-)
What is the best way to parse relatively simple text files?
Well, I don't know that you're going to like my answer. Maybe first read some of the other fine answers here.
No, really, go ahead. I'll wait.
Ah, you're back and relaxed. Let's ease into things, shall we?
Never write it in 'C' if you can do it in 'awk';
Never do it in 'awk' if 'sed' can handle it;
Never use 'sed' when 'tr' can do the job;
Never invoke 'tr' when 'cat' is sufficient;
Avoid using 'cat' whenever possible.
-- Taylor's Laws of Programming
If you're writing it in C, but C feels like the wrong tool...it really might be the wrong tool. awk or perl will likely do what you're trying to do without all the aggravation. You may even be able to do it with cut or something similar.
On the other hand, if you're writing it in C, you probably have a good reason to write it in C. Maybe your parser is a tiny part of a much larger system, which, for the sake of argument, is embedded, in a refrigerator, on the moon. Or maybe you loooove C. You may even hate awk and perl, heaven forfend.
If you don't hate awk and perl, you may want to embed them into your C program. This is doable, in principle--I've never done it myself. For awk, try libmawk. For perl, there are probably a few ways (TMTOWTDI). You can run perl separately, using popen to start it, or you can actually embed a Perl interpreter into your C program--see man perlembed.
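For the popen route, a minimal sketch might look like this (it assumes a POSIX system with perl on the PATH; the one-liner and the "config.ini" filename are just placeholders for whatever you'd actually offload):

    /* Sketch of handing the parsing off to perl via popen(). */
    #include <stdio.h>

    int main(void)
    {
        /* -n wraps the script in a while(<>) loop over config.ini; the regex
           just pulls out key = value pairs as a stand-in for real parsing. */
        FILE *p = popen("perl -ne 'print \"$1=$2\\n\" if /^\\s*(\\w+)\\s*=\\s*(.*)/' config.ini",
                        "r");
        char line[512];

        if (!p) {
            perror("popen");
            return 1;
        }
        while (fgets(line, sizeof line, p)) {
            /* each line now arrives pre-digested as key=value */
            fputs(line, stdout);
        }
        return pclose(p) == 0 ? 0 : 1;
    }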
Anyhow, as I've said, "the best way to parse" entirely depends on you and your team, the problem space, and your approach to the issue. What I can offer is my opinion.
I'm going to assume that in your C-only solutions (library functions and FSM (considering your explode to essentially be a library function)) you've already done your best at isolating the relevant code, designing the code and files well, and so forth.
Even so, I'm going to recommend lex and yacc.
Library functions feel "clumsy and awkward." A state machine seems unmaintainable. But you say that lex and yacc feel like overkill.
I think you should approach your complaints differently. What you're really doing is specifying an FSM. However, you're also hiring someone to write and maintain it for you, thereby solving most of the maintainability problem. Overkill? Did I mention they'll work for free?
I suspect, but do not know, that the reason lex and yacc originally felt like overkill was that your config / simple files just felt too, well, simple. If I'm right (a big if), you may be able to do most of your work in the lexer. (It's even conceivable that you can do all of your work in the lexer, but I know nothing about your input.) If your input is not only simple but widespread, you may be able to find a lexer/parser combination freely available for what you need.
In short: if you can do this not in C, try something else. If you want C, use lex and yacc--they have a little overhead, but they're a very good solution.
If you can get it to work, I'd go with an FSM, but with a huge assist from Perl-compatible regular expressions. This library is easy to understand, and you ought to be able to trim back sufficient extraneous spaghetti to give your monster that aerodynamic flair to which all flying monsters aspire. That, and plenty of comments in well-structured spaghetti, ought to make your code-maintaining successor comfortable. (And, as I'm sure you know, that code-maintaining successor is you after six months, when you've moved on to something else and the details of this code have slipped your mind.)
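For illustration, the PCRE side of that combination looks roughly like this (classic PCRE 1 API, assuming libpcre is installed; the pattern and input are placeholders):

    /* Sketch of using classic PCRE inside one FSM step: compile a pattern once,
     * then let it chew a "key = value" line apart. */
    #include <stdio.h>
    #include <string.h>
    #include <pcre.h>

    int main(void)
    {
        const char *err;
        int erroff;
        int ovec[9];                           /* 3 ints per capture slot */
        const char *line = "speed = 42";

        pcre *re = pcre_compile("^\\s*(\\w+)\\s*=\\s*(.*)$", 0, &err, &erroff, NULL);
        if (!re) {
            fprintf(stderr, "pcre_compile failed at %d: %s\n", erroff, err);
            return 1;
        }

        int rc = pcre_exec(re, NULL, line, (int)strlen(line), 0, 0, ovec, 9);
        if (rc >= 3)                           /* whole match + 2 captures */
            printf("key '%.*s', value '%.*s'\n",
                   ovec[3] - ovec[2], line + ovec[2],
                   ovec[5] - ovec[4], line + ovec[4]);

        pcre_free(re);
        return 0;
    }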
My short answer is to use the right tool for the problem. If you have configuration files, use existing standards and formats, e.g. INI files, and parse them using Boost program_options.
If you enter the world of your own languages, use lex/yacc, since they provide the required features, but consider the cost of maintaining the grammar and the language implementation.
As a result, I would recommend further narrowing your problem scope to find the right tool.

Converting Win16 C code to Win32

In general, what needs to be done to convert a 16 bit Windows program to Win32? I'm sure I'm not the only person to inherit a codebase and be stunned to find 16-bit code lurking in the corners.
The code in question is C.
1 - The meanings of wParam and lParam have changed in many places. I strongly encourage you to be paranoid and convert as much as possible to use message crackers (see the example after this list). They will save you no end of headaches. If there is only one piece of advice I could give you, this would be it.
2 - As long as you're using message crackers, also enable STRICT. It'll help you catch the Win16 code base using int where it should be using HWND, HANDLE, or something else. Converting these will greatly help with #9 on this list.
3 - hPrevInstance is useless. Make sure it's not used.
4 - Make sure you're using Unicode-friendly calls. That doesn't mean you need to convert everything to TCHARs, but it does mean you'd better replace OpenFile, _lopen, and _lcreat with CreateFile, to name the obvious ones.
5 - LibMain is now DllMain, and the entire library format and export conventions are different.
6 - Win16 had no VMM. GlobalAlloc, LocalAlloc, GlobalFree, and LocalFree should be replaced with more modern equivalents. When done, clean up calls to LocalLock, LocalUnlock, and friends; they're now useless. Not that I can imagine your app doing this, but make sure you don't depend on WM_COMPACTING while you're there.
7 - Win16 also had no memory protection. Make sure you're not using SendMessage or PostMessage to send pointers to out-of-process windows. You'll need to switch to a more modern IPC mechanism, such as pipes or memory-mapped files.
8 - Win16 also lacked preemptive multitasking. If you wanted a quick answer from another window, it was totally cool to call SendMessage and wait for the message to be processed. That may be a bad idea now. Consider whether PostMessage isn't a better option.
9 - Pointer and integer sizes change. Remember to check carefully anywhere you're reading or writing data to disk--especially if they're Win16 structures. You'll need to manually redo them to handle the shorter values. Again, the least painful way to deal with this will be to use message crackers where possible. Otherwise, you'll need to manually hunt down and convert int to DWORD and so on where applicable.
10 - Finally, when you've nailed the obvious, consider enabling 64-bit compilation checks. A lot of the issues faced in going from 16 to 32 bits are the same as in going from 32 to 64, and Visual C++ is actually pretty smart these days. Not only will you catch some lingering issues; you'll get yourself ready for your eventual Win64 migration, too.
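To show what I mean by message crackers (point 1), here is a minimal example using the HANDLE_MSG macros from <windowsx.h>; window-class registration and WinMain are omitted, and the handlers are placeholders:

    /* HANDLE_MSG expands to the case label and unpacks wParam/lParam with the
     * correct Win32 layout, so handlers never touch the raw parameters. */
    #include <windows.h>
    #include <windowsx.h>

    /* cracker prototype: void Cls_OnCommand(HWND, int id, HWND hwndCtl, UINT codeNotify) */
    static void OnCommand(HWND hwnd, int id, HWND hwndCtl, UINT codeNotify)
    {
        /* id and codeNotify arrive already split out the Win32 way (LOWORD/HIWORD
           of wParam), which is exactly where hand-ported Win16 code goes wrong. */
    }

    /* cracker prototype: void Cls_OnDestroy(HWND) */
    static void OnDestroy(HWND hwnd)
    {
        PostQuitMessage(0);
    }

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg) {
            HANDLE_MSG(hwnd, WM_COMMAND, OnCommand);
            HANDLE_MSG(hwnd, WM_DESTROY, OnDestroy);
        default:
            return DefWindowProc(hwnd, msg, wParam, lParam);
        }
    }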
EDIT: As #ChrisN points out, the official guide for porting Win16 apps to Win32 is available in archived form, and it both fleshes out and adds to my points above.
Apart from getting your build environment right, here are a few specifics you will need to address:
Structs containing ints will need to change to short or widen from 16 to 32 bits. If you change the size of a structure that is loaded from or saved to disk, you will need to write data-file upgrade code.
Per-window data is often stored with the window handle using GWL_USERDATA. If you widen some of that data to 32 bits, your offsets will change.
POINT and SIZE structures are 64 bits in Win32. In Win16 they were 32 bits and could be returned as a DWORD (the caller would split the return value into two 16-bit values). This no longer works in Win32 (i.e. Win32 does not return 64-bit results), and the functions were changed to accept a pointer in which to store the result. You will need to edit all of these; APIs like GetTextExtent are affected (see the sketch below). The same issue also applies to some Windows messages.
The use of INI files is discouraged in Win32 in favour of the registry. While the INI file functions still work, you will need to be careful with Vista issues. 16-bit programs often stored their INI files in the Windows system directory.
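For the POINT/SIZE point, the old packed-DWORD pattern and its Win32 replacement look roughly like this, using GetTextExtent versus GetTextExtentPoint32 (the surrounding function is illustrative):

    /* Win16 returned text extents packed into a DWORD; Win32 fills in a SIZE
     * through a pointer instead. The same edit is needed wherever the old
     * LOWORD/HIWORD-splitting pattern appears. */
    #include <windows.h>

    void measure(HDC hdc, LPCTSTR text, int len)
    {
    #ifdef WIN16_STYLE      /* the old Win16 pattern, kept only for comparison */
        DWORD ext = GetTextExtent(hdc, text, len);
        int cx = LOWORD(ext);
        int cy = HIWORD(ext);
    #else                   /* Win32: the result comes back through a SIZE* */
        SIZE ext;
        GetTextExtentPoint32(hdc, text, len, &ext);
        int cx = ext.cx;
        int cy = ext.cy;
    #endif
        /* ... use cx and cy ... */
        (void)cx; (void)cy;
    }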
These are just a few of the issues I can recall. It has been over a decade since I did any Win32 porting. Once you get into it, it goes quite quickly. Each codebase will have its own "feel" when it comes to porting, which you will get used to. You will probably even find a few bugs along the way.
There was a definitive guide in the article Porting 16-Bit Code to 32-Bit Windows on MSDN.
The original win32 sdk had a tool that scanned source code and flagged lines that needed to be changed, but I can't remember the name of the tool.
When I've had to do this in the past, I've used a brute force technique - i.e.:
1 - update makefiles or build environment to use 32 bit compiler and linker. Optionally, just create a new project in your IDE (I use Visual Studio), and add the files manually.
2 - build
3 - fix errors
4 - repeat 2&3 until done
The pain of the process depends on the application you are migrating. I've converted 10,000 line programs in an hour, and 75,000 line programs in less than a week. I've also had some small utilities that I just gave up on and rewrote (mostly) from scratch.
I agree with Alan that trial and error is probably the best way.
Here are some good tips.
Agreed that the compiler will probably catch most of the errors. Also, if you are using "near" and "far" pointers you can remove those designations -- a pointer is just a pointer in Win32.
