Is there a way I can create a virtual instance of the gcc compiler in the client's browser when the client opens my website?
By doing so, I could pass the user's .c file directly as an argument to my compiler instance and then execute it, without having to make a POST call to the server and execute the file there.
Originally I understood your question to be targeting the native platform on which the browser is running:
Consider that browsers may be running on many different platforms, operating systems, and processor architectures. Compiling C in the way you describe might be technically doable, but practically infeasible.
I was basing "practically infeasible" on the difficulty of supporting the plethora of widely used browser platforms.
Now I understand that you are thinking more along the lines of targeting a virtual environment. I'll amend "practically infeasible" to "a large amount of work".
If I understand your intent, it is to run a C compiler that emits, shall we say, x86 compiled code, and then execute that code. To do that we need an emulation of the x86 environment in, say, JavaScript. What's more, I think your intent is that the compiler itself execute in this environment, so that you can re-use gcc. So you'll need to emulate a file system too. It's "obvious" that this could be done, but it really is a lot of work. Is it really worth it?
Competition code is small (I guess), and even with lots of programmers the number of simultaneous compiles can't be that huge. With a decent queued-request system, a touch of Ajax, and a bit of back-end scaling, how costly is it to support the expected population? What's the ratio of developers to back-end systems?
Anyway, if I were to address this problem, I'd take the code for an open-source browser and meld in the gcc code, producing a compiler/browser hybrid. Give that to the developers and tell them: "Use this and get zippy compilation speeds, or use your own browser and join the queue."
You're not going to use GCC as written for this. At best, you could accomplish something similar if you had a compiler written in Java that targeted the JVM and could be run as an applet. I don't know what it would take to get something like this working, but I suspect it would take a fair bit of work to get it up and going. As far as I know, nothing currently exists that does this.
Perhaps use jsLinux in the background? The build process could then run in the virtual machine. Communication could be done by extending the clipboard transfer, perhaps into multiple pipes...
I would be interested in JavaScript-based gcc solutions, too.
According to most benchmarks, Intel's Clear Linux is much faster than other distributions, largely thanks to a GCC feature called Function Multi-Versioning (FMV). Right now the method they use is to compile the code, analyze which functions contain vectorized loops, then patch the code with FMV attributes and compile it again.
How feasible would it be for GCC to do this automatically? For example, by passing -mmultiarch=sandybridge,skylake (or a similar -m option listing CPU extensions like AVX and AVX2).
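For reference, a minimal sketch of what hand-written FMV looks like with GCC's target_clones attribute (available since GCC 6 on x86); the hypothetical -mmultiarch flag above would effectively apply this automatically:

/* FMV by hand: GCC emits one clone of the function per listed
   target plus a resolver that picks the best clone at load time. */
__attribute__((target_clones("avx2", "avx", "default")))
double dot(const double *a, const double *b, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];   /* the loop the vectorized clones speed up */
    return s;
}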
Right now I'm interested in two usage scenarios:
Use this option for our large math-heavy program for delivering releases to our customers. I don't want to pollute the code with non-standard attributes and I don't want to modify the third-party libraries we use.
The other Linux distributions will be able to do this easily, without patching the code as Intel does. This should give all Linux users massive performance gains.
No, but it doesn't matter. There's very, very little code that will actually benefit from this. For the most part, doing it globally will just make your system much more memory-constrained and slower due to the huge increase in code size (unless you make a special effort to sort matching versions into the same pages). Most actual loads aren't even CPU-bound; they're syscall-overhead-bound, GPU-bound, IO-bound, etc. And many of the modern ones that are CPU-bound aren't running precompiled code but JIT'd code (i.e. everything running in a browser, whether that's your real browser or the outdated and unpatched fork of Chrome in every Electron app).
I need to write some C functions that will be called by a Java program running on a CentOS Linux server, as part of a web application. The server is a hosted dedicated server sitting in another physical location, far away from me.
Do I need to develop the C code on the server directly, tunneling into the server for development? Or can I develop the C program on a Mac or Windows PC in my office, then, once everything is working, store the final result on the server for use? If the latter, does it limit my choice of development environment in any way? That is, which compiler should I use, or are there any settings in the IDE or compiler I need to worry about, since the development environment will differ from the production environment?
If I use Xcode version 3 on a Mac, it uses GCC by default, whereas Xcode version 4 compiles with LLVM-GCC. Does the choice of compiler matter, assuming I'm using only C99-standard features? I don't want the code to depend on the development environment, since I can't guarantee that environment will stay the same in the future. Can I switch the compiler manually in Xcode somehow to verify the code works in GCC as well as LLVM?
Ignoring Windows, things are pretty portable across Mac/Linux. You can develop it on the Mac in whatever development environment you want (I personally use TextWrangler and GCC from the command line).
Once you develop your software, it's a simple matter of copying the file to your remote server and compiling it there.
You may or may not need to change a few things. The only portability issues I've run into were Mac's socket() using PF_ instead of AF_ (Mac will still accept AF_, but it doesn't advertise it in its manpage, and other systems will not necessarily accept PF_) and sranddev() not being available on some systems; both were very easily resolved.
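For illustration, a minimal sketch of the portable spelling (AF_INET is accepted everywhere, while PF_INET is the historical alias):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    /* AF_INET is the portable address-family constant; some
       platforms document PF_INET instead, but AF_ works everywhere
       in practice. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    /* ... bind()/connect() as usual ... */
    return 0;
}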
If, however, you wanted to write the software directly on the remote box, it's definitely not hard to do. I would just ssh in and take your pick of text editors (usually vi or emacs) and compilers (usually gcc).
In general, for programs that are just traditional Unix command-line things, I tend to avoid Xcode as much as possible because it likes to hide things, and IMO it's a good thing to actually understand what is going on behind the scenes. (Especially if you use other *nix systems.)
Whatever you do, it will need to be recompiled on the server.
You can probably create code that's runnable/testable under both environments, although you may have to #ifdef around compatibility issues. How much of that, if any, depends a lot on what you're actually writing.
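A minimal sketch of what #ifdef-ing around such a difference looks like, using the sranddev() gap mentioned in another answer (the SEED_RANDOM macro name is made up for illustration; __APPLE__ is a standard predefined macro):

#include <stdlib.h>

/* sranddev() exists on macOS/BSD but not on Linux, so fall back
   to seeding with the clock there. */
#ifdef __APPLE__
#define SEED_RANDOM() sranddev()
#else
#include <time.h>
#define SEED_RANDOM() srand((unsigned)time(NULL))
#endif

int main(void)
{
    SEED_RANDOM();          /* same call on every platform */
    return rand() % 10;
}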
Does the choice of compiler matter assuming I'm using C99 standard things?
Yes: Microsoft, AFAIK, still doesn't fully support C99 (though maybe that's changed in the latest MSVC). Also, you have to resist the temptation to use non-standard features just because they are there. On the other hand, a local build environment might force you to write portable programs.
The choice depends on how your program is going to communicate with the larger system, but developing at least parts locally is probably the most convenient option.
A previous relevant question of mine is here: Reverse Engineering old paint programs.
I have set up my base of operations here: http://animatorpro.org (wiki coming soon).
Okay, so now I have a 300,000-line legacy MS-DOS codebase. It's sort of a "be careful what you wish for" situation. I am not an experienced C programmer. I'm not entirely inexperienced either, but for all intents and purposes I'm a noob to the language, and in particular to the intricacies of its libraries. I am especially ignorant of the differences between C programs written specifically for MS-DOS and programs that are cross-platform. However, I have been studying this codebase for over a year now, and this is what I know about Animator Pro:
Compilers and tools used:
Watcom C compiler
tcmake (the make program from Turbo C)
386asm, a specialised assembler for the Phar Lap DOS extender
and of course, the Phar Lap DOS extender itself
a selection of obscure DOS utilities
Much of the compilation seems to be driven by batch files. Though I have obtained copies of all these tools, I have not yet succeeded at compiling it (though I have compiled its older brother, Autodesk Animator Original).
It's got a plugin system, based on REX, that replicated DLLs before DLLs were available. The plugin system handles:
Video drivers (with a plethora of included VESA drivers)
Input drivers (including Wacom tablets and keyboards)
Drawing tools
Inks (like Photoshop's filters or blending modes)
Scripting addons (essentially compiled scripts)
File formats
It's got its own script interpreter named POCO, based on the C language. The scripting language has enough power to do virtually everything the plugin system can do, just slower.
Given this information, this is my development plan. Please criticise this. The source code is available in the link above, so you can easily, if you are so inclined, assess the situation yourself.
1. Compile it with its original tools.
2. Switch to using DJGPP, and make the necessary changes to get it to compile with that, plus the original assembler.
3. Include the Allegro.cc "game" library, and switch over as much functionality to that library as possible, perhaps by simply writing new video and input drivers that use the Allegro API. I'm thinking Allegro rather than SDL because there is a DOS version of Allegro and, fascinatingly, one of its core functions is the ability to play Animator Pro's native FLIC format.
4. Hopefully, after 3, I will have eliminated most or all of the assembler in the project. I say hopefully because it's in an obscure dialect that doesn't assemble in any modern free assembler without significant modification; I have tried them all. Whatever is left gets converted to assemble in NASM, or to C code if I can determine what the assembler actually does.
5. Switch the DOS extender from Phar Lap to HX DOS (http://www.japheth.de/HX.html), which promises to replicate as much of the Win32 API as possible. Then make all the necessary code changes for that to work.
6. Switch to the Win32 version of Allegro.cc, assuming the Win32 version can run on top of HX DOS. Make any further necessary changes.
7. Modify the plugin system to use some kind of standard cross-platform plugin library. What this would be, I have no idea; maybe you can offer some suggestions? I talked to the developer who originally wrote the plugin system, and he said some of the things it does aren't possible on modern OSes because of segmentation restrictions. I'm not sure what this means, but I'm guessing all the plugins will need to be rewritten almost from scratch.
8. With all the above magically done, we can try to make it run on Windows, OS X, and Linux, while dealing with other cross-platform niggles like long file names and things I haven't thought of.
Anyone got a problem with any of this? Is Allegro a good choice? If not, why not? What would you do about the plugin system? What would you do differently? Is this whole thing foolish; should I just rewrite it from scratch, using the original as inspiration? (It would apparently take the original developer "about a month" to do that.)
One thing I haven't covered above is the text/font system. I'm not sure what to do about that: Animator Pro has its own custom font format, but it can also use PostScript Type 1 fonts and some other formats.
My biggest concern with your plan, in a nutshell: your approach seems to be to keep the whole enormous thing working at all times while tweaking the environment ever further away from DOS. During each tweak to the environment, that means you will have approximately a billion subtle assumptions that might have broken at once, none of which you necessarily understand yet. Untangling them all at once will be incredibly painful.
If I were doing the port, my approach would be to disable as much code as possible to get SOMETHING running in a modern environment, and bring the parts back online, one piece at a time. Write a simple test harness program that loads a display driver and draws some stuff, and compile it for DOS to make sure you understand the interface. Then write some C code that implements the same interface, but with Allegro (or SDL or SFML), and make that program work under Windows or Linux. When the output differs, you have a simple test case to work from.
Your entire job on this port is swapping out implementations of various interfaces and functions with completely new ones. This is a job that unit testing excels at. Don't write any new code without a test of some kind that runs on the old code under DOS! Make your potential problems as small and simple as you possibly can. Port assembly code instead of rewriting it only if you're reasonably confident that it will actually make your job easier (i.e., algorithmic stuff that compiles fine with few tweaks under NASM). Don't bite off a bigger piece than you can comfortably fit in your brain at once.
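To make the harness idea concrete, here is a minimal sketch; the driver interface below is invented for illustration, since the real one is defined by Animator Pro's own driver headers:

/* test_harness.c - hypothetical sketch: exercise a video driver
   through a small vtable and dump the framebuffer so the DOS build
   and the ported build can be diffed byte-for-byte. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

typedef struct VideoDriver VideoDriver;
struct VideoDriver {
    int w, h;
    uint8_t *pixels;    /* 8-bit indexed framebuffer, w*h bytes */
    void (*set_pixel)(VideoDriver *self, int x, int y, uint8_t c);
};

static void mem_set_pixel(VideoDriver *d, int x, int y, uint8_t c)
{
    if (x >= 0 && x < d->w && y >= 0 && y < d->h)
        d->pixels[y * d->w + x] = c;
}

int main(void)
{
    VideoDriver d = { 320, 200, calloc(320 * 200, 1), mem_set_pixel };

    for (int i = 0; i < 100; i++)       /* draw a diagonal test line */
        d.set_pixel(&d, i, i, 15);

    FILE *f = fopen("frame.raw", "wb"); /* raw dump for diffing */
    if (f) {
        fwrite(d.pixels, 1, 320 * 200, f);
        fclose(f);
    }
    free(d.pixels);
    return 0;
}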
I, for one, look forward to seeing your progress! I think what you're attempting to do is great. Thanks for doing it.
Hmmm, I might approach it by writing an OpenGL video "driver" for it. Today's machines are fast enough, with tons of RAM, that you could run all the pixel-specific algorithms on the main CPU against a back buffer and it would work. Since the "generic" VGA driver just mapped the video buffer to a pointer, that would be a place to start. There was a zoom mode in the UI, so you can still look at individual pixels on a high-res display.
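A minimal sketch of that idea, assuming legacy OpenGL and GLUT for brevity (draw into a CPU-side buffer, then push it to the screen each frame):

/* glbackbuf.c - sketch: keep a framebuffer in main memory and
   blit it with legacy OpenGL.
   Compile (Linux): gcc glbackbuf.c -lGL -lglut */
#include <GL/glut.h>
#include <string.h>

#define W 320
#define H 200

static unsigned char backbuf[W * H * 3];   /* CPU-side RGB back buffer */

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glRasterPos2f(-1.0f, -1.0f);           /* bottom-left of window */
    glPixelZoom(1.0f, 1.0f);               /* scale up here for "zoom mode" */
    glDrawPixels(W, H, GL_RGB, GL_UNSIGNED_BYTE, backbuf);
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    memset(backbuf, 64, sizeof backbuf);   /* pixel algorithms draw here */
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(W, H);
    glutCreateWindow("back buffer blit");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}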
It is often very difficult to take an existing non-trivial codebase that wasn't written with portability in mind (you mention a few of the obstacles) and then try to make it portable; there will be a lot of problems along the way. It is probably a better idea to start from scratch and rewrite the code, using the existing code as reference only. If you start from scratch, you can leverage an existing portable UI solution in your new project, like Qt.
I want to run a simple hello-world app, written in C, on my AT91SAM9RL-EK.
Is it possible without an OS?
And (if it is) how do I have to compile it?
Right now I'm trying to use CodeSourcery G++ Lite to create ARM code.
(In general, which programs can the board start without an OS: assembler, ARM code?)
Sure, no problem running without an operating system, I do that kind of thing daily...
http://sam7stuff.blogspot.com/
Your programs are, at least at first, not going to resemble desktop applications. I would avoid any C libraries, no printf or strcmp or things like that, until you get the feel for it and find the right tools. No floating point as well. Add some numbers, do some shifting, blink some LEDs.
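For instance, a minimal bare-metal sketch with no C library at all; the PIO register addresses and LED pin below are assumptions, so verify them against the AT91SAM9RL datasheet and your board schematic:

/* blink.c - hypothetical bare-metal LED blink. notmain() would be
   called from a tiny startup.s that sets up the stack. */
#define PIOA_BASE 0xFFFFF400u          /* assumed PIO controller base */
#define PIO_PER   (*(volatile unsigned int *)(PIOA_BASE + 0x00))
#define PIO_OER   (*(volatile unsigned int *)(PIOA_BASE + 0x10))
#define PIO_SODR  (*(volatile unsigned int *)(PIOA_BASE + 0x30))
#define PIO_CODR  (*(volatile unsigned int *)(PIOA_BASE + 0x34))
#define LED_PIN   (1u << 15)           /* assumed LED pin */

void notmain(void)
{
    PIO_PER = LED_PIN;                 /* hand the pin to the PIO */
    PIO_OER = LED_PIN;                 /* make it an output */
    for (;;) {
        PIO_CODR = LED_PIN;            /* LED on (drive low) */
        for (volatile int i = 0; i < 400000; i++) ;
        PIO_SODR = LED_PIN;            /* LED off */
        for (volatile int i = 0; i < 400000; i++) ;
    }
}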
CodeSourcery Lite is probably the fastest way to get started; the EABI one, I believe, is the one you want.
This WinARM site has a compiler and tons of non-OS projects for seemingly every ARM-based microcontroller:
http://www.siwawi.arubi.uni-kl.de/avr_projects/arm_projects/
Atmel is very, very good about information; no doubt they have example programs you can try on the eval board as well.
Emdebian is another cross-compiler that is somewhat up to date and has binaries. Building gcc from scratch for cross-compiling is not bad at all; the C library is another story though, and even the gcc support library for that matter. I find it easier to do without either library.
It is possible to get a C library working and run a great many kinds of programs; it depends on what you are looking to do. Ah, I just looked at the specs: that is a pretty serious eval board, with plenty of power for an operating system should you choose to run one. Even without an OS, you can certainly run programs that use the display as a user interface, read/write SD cards, use USB, basically everything on the board, if you choose.
Have you ever obfuscated your code before? Are there ever legitimate reasons to do so?
I have obfuscated my JavaScript. It made it smaller, thus reducing download times. In addition, since the code is handed to the client, my company didn't want them to be able to read it.
Yes, to make it harder to reverse engineer.
To ensure a job for life, of course (kidding).
This is pretty hilarious and educational: How To Write Unmaintainable Code.
It's called "Job Security". This is also the reason to use Perl -- no need to do obfuscation as separate task, hence higher productivity, without loss of job security.
Call it "security through obsfuscability" if you will.
I don't believe making reverse engineering harder is a valid reason.
A good reason to obfuscate your code is to reduce the compiled footprint. For instance, J2ME applications need to be as small as possible. If you run your app through an obfuscator (and optimiser), you can reduce the jar from a couple of MB to a few hundred KB.
The other point, nestled above, is that most obfuscators are also optimisers which can improve your application's performance.
Isn't this also used as security through obscurity? When your source code is publicly available (JavaScript etc.), you might want to at least make it somewhat harder to understand what is actually occurring on the client side.
Security is always full of compromises, but I think that security by obscurity is one of the least effective methods.
I believe all TV cable boxes will have their Java code obfuscated. This does make things harder to hack, and since the cable boxes will be in your home, they are theoretically hackable.
I'm not sure how much it will matter, since the CableCARD will still control signal encryption and gets its authorization straight from the video source rather than from the Java code guide or Java apps, but they are pretty dedicated to the concept.
By the way, it is not easy to trace exceptions thrown from an obfuscated stack! I actually memorized at one point that aH meant "null pointer exception" for a particular build.
I remember creating a Windows service for an online backup application built in .NET. I could easily use Visual Studio or tools like .NET Reflector to see the classes and the source code inside it.
I created a new Visual Studio test application and added the Windows service reference to it. Double-clicking the reference showed me all the classes and namespaces, everything (not the source code, though). Anybody can figure out the internal workings of your modules just by looking at the class names. In my case, one such class was FTPHandler, which clearly tells you where the backups are going.
.NET Reflector goes beyond that by showing the actual code. It even has an option to export the whole project, so you get a VS project with all the classes and source code, similar to what the developer had.
I think it makes sense to obfuscate, to make it at least harder, if not impossible, for someone to disassemble your code. It also makes sense for products with a large customer base where you do not want your competitors to know much about your products.
Looking at some of the code I wrote for my disk driver project makes me question what it means to be obfuscated.
((int8_t (*)( int32_t, void * )) hdd->_ctrl)( DISK_CMD_REQUEST, (void *) dr );
Or is that just system programming in C? Or should that line be written differently? Questions...
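For comparison, a sketch of the same call with a named function-pointer type; the surrounding types here are hypothetical reconstructions, invented only to keep the example self-contained:

#include <stdint.h>

enum { DISK_CMD_REQUEST = 1 };      /* stand-in for the real constant */

/* Naming the function-pointer type is what makes the call readable. */
typedef int8_t (*disk_ctrl_fn)(int32_t cmd, void *arg);

struct disk {
    void (*_ctrl)(void);            /* entry point stored generically,
                                       hence the cast at the call site */
};

static int8_t fake_ctrl(int32_t cmd, void *arg)
{
    (void)cmd; (void)arg;           /* placeholder driver */
    return 0;
}

int main(void)
{
    struct disk hd = { (void (*)(void))fake_ctrl };
    struct disk *hdd = &hd;
    void *dr = 0;

    disk_ctrl_fn ctrl = (disk_ctrl_fn)hdd->_ctrl;
    return ctrl(DISK_CMD_REQUEST, dr);
}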
Yes and no; I haven't delivered apps with a tool that was easily decompilable.
I did run something like an obfuscator for old BASIC and UCSD Pascal interpreters, but that was for a different reason: optimizing run time.
If I am delivering Java Swing apps to clients, I always obfuscate the class files before distribution.
You can never be too careful: I once pointed a decent Java decompiler (I used the JD Java Decompiler, http://www.djjavadecompiler.com/) at my class files and was rewarded with an almost perfect reproduction of the original code. That was rather unnerving, so I have obfuscated my production code ever since. I use Zelix KlassMaster myself (http://www.zelix.com/klassmaster/).
I have mostly obfuscated the code of my Android applications, using the ProGuard tool.
When I worked on a C# project, our team used ArmDot, a licensing and obfuscation system.
Modern obfuscators are used not only to make the hacking process difficult. They can protect programs and games from cheating, check licenses/keys, and even optimize code.
But I don't think it is necessary to use an obfuscator in every project.
It's most commonly done when you need to provide something in source form (usually due to the environment it's being built in, such as systems without shared libraries, especially if you as the seller don't have the exact system being built for), but you don't want the recipient to be able to modify or extend it significantly (or at all).
This used to be far more common than it is today. It also led to the (defunct?) International Obfuscated C Code Contest.
A legal (though arguably not "legitimate") use might be to release the "source" for an app you're linking with GPL code in obfuscated fashion. It's source; it can be modified; it's just very hard to. That would be a more extreme version of releasing it without comments, or releasing it with all whitespace trimmed, or (and this would probably be pushing the legal grounds) releasing assembler source generated from C (perhaps hand-tweaked so you can say it's not just intermediate code).