I'm working on a site that uses KaTeX for rendering math. However, the interface for entering the math content is (really) not ideal, so it is actually faster for me to work in an editor like Sublime Text 3 and import the work. An issue I run into is that when I import, I discover that various functions / environments aren't supported (i.e. emulated) by KaTeX.
If it were just me working on the material, I would simply learn as I go and consult the KaTeX documentation page; however, I have several contractors working on digitizing content who do not have access to the site (and I don't have the ability to give them access), so they cannot learn by trial and error. Instead, I end up with piles of documents that all need to be manually adjusted to render as desired with KaTeX.
As such, I wanted to assemble a preamble for a LaTeX document that would recreate the abilities (i.e. the functions and environments) KaTeX can emulate, and I was wondering whether such a preamble / package already exists. I have tried a few quick searches, but because I'm looking for something that imitates an emulator, I'm finding it tricky to find the right choice of words to get relevant results.
I wasn't sure whether this were best posted here or on TeX.SE (I suspect it falls in between the two), so I apologize if my guess was wrong and I should have tried there first. Any suggestions would be very much appreciated, as this is creating a substantial bottleneck in my workflow but is also just outside my ability to solve on my own.
Supported functions are one thing. To tackle that, you might actually stand a fair chance of just tokenizing the input, looking for backslash-name sequences and checking them against a list extracted from the KaTeX sources to see which are supported.
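For what it's worth, here is a minimal sketch of that check as a Node script. It assumes you have already extracted a whitelist of supported commands from the KaTeX sources or documentation; the few entries shown are only placeholders.

// Rough sketch: scan a .tex file for \name commands and report any that are
// not in a whitelist extracted from the KaTeX sources / documentation.
const fs = require('fs');

// Placeholder whitelist -- generate the real one from KaTeX's supported-functions list.
const supported = new Set(['\\frac', '\\sqrt', '\\alpha', '\\begin', '\\end']);

const source = fs.readFileSync(process.argv[2], 'utf8');
const used = source.match(/\\[a-zA-Z]+/g) || [];
const unsupported = [...new Set(used)].filter(cmd => !supported.has(cmd));

if (unsupported.length) {
  console.log('Possibly unsupported commands: ' + unsupported.join(', '));
} else {
  console.log('All commands found in the whitelist.');
}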
I guess one could even try to remove all other functions from LaTeX. Or rather hide them, such that the user input can't access them but third-party libraries can. Getting rid of language features (as opposed to macros) such as \def would probably be even harder. Better ask on the TeX Stack Exchange for details if you really want to follow this route.
As an alternative I guess you might be able to perform the check I described above in TeX. Write a macro which reads the current file as plain text instead of TeX source, to perform this analysis. Or some such. But a separate stand-alone tool would be much easier.
If you are going for a separate tool, you might as well write it in JavaScript for Node, and have it run KaTeX on the input. That way you can at least tell whether it will get typeset to something or error out.
Whether the rendering is what you expect from LaTeX may be another question. In general KaTeX aims to reproduce LaTeX behaviour, so any difference might indicate a bug. But bugs exist, so all of this might not avoid the need for checks. How about just processing the math parts of the input with KaTeX into HTML which authors can check without access to the site?
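As a hedged sketch of that idea (it assumes only that the katex npm package is installed, and that you have some way of pulling the math snippets out of the documents):

// Sketch: try to render each math snippet with KaTeX and collect both the
// errors and the generated HTML so authors can review it without site access.
const katex = require('katex');

function checkSnippets(snippets) {
  return snippets.map(tex => {
    try {
      const html = katex.renderToString(tex, { throwOnError: true, displayMode: true });
      return { tex, ok: true, html };
    } catch (err) {
      return { tex, ok: false, error: err.message };
    }
  });
}

// Example usage; in practice the snippets would come from the contractors' files.
const report = checkSnippets(['\\frac{a}{b}', '\\notarealmacro{x}']);
report.forEach(r => console.log(r.ok ? 'OK  ' : 'FAIL', r.tex, r.error || ''));

To preview the generated HTML the way the site would show it, the page you write it into would also need to link KaTeX's stylesheet.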
As for existing tools or macro packages, I know of none, but tool and library recommendation questions are off topic on Stack Exchange anyway.
I have been using both in my projects, and sometimes I find the need to use a Material UI component within a Bootstrap component, and the UI displays as I would expect. I have been advised, though, not to use this approach. Is there any reason why, since both use a grid system and can be flexed?
I tend to be verbose so I'll put the concise answer up top here:
Conclusion:
Whoever said it was bad to use both might just be expressing their opinion; in reality, saying it's bad to use both really lacks context about what you're designing. #user3770494 made a very good point, and the point about the build size is valid, but it does depend on the scope of the application. If it's an intra-office application with everyone on a local fiber network, it'd all be cached in memory anyway... but (not that you know me) I would not judge you negatively if you mixed them together, unless it was for a MAJOR million-user application that has to run on mobile (and very low-end devices), desktop, and other devices around the world, requiring real(ish)-time updates and streaming dynamic content to tens of thousands of active users at any given time, 24/7.
In all truth, if it's not life or death, I'd say use both, and also do your own. The experience of understanding more than one thing is better for personal growth than just "committing to a single solution".
The rest of this reading is optional - you're welcome :)
I personally have used both in production applications (both together, and independently)... I've also done it all from scratch... (CSS is my least favorite part of the job - luckily I have a coworker who is great at it.) Here are my thoughts:
Warning: I tend to be verbose.
Disclaimer:
As someone who likes function over form, form is an afterthought to me, the kind of thing clients get nit-picky about with tenuous little changes. I am going to try to leave my opinion on how each option looks or feels out of this as much as I can.
Also, I'm looking at your question in light of a choice I'm currently making, which is using ReactJS / create-react-app to make "demo" projects for touch-screen embedded systems. I'm going to roll out half a dozen mock programs for demos, nothing that really does anything exactly (CC scanner, barcode scanner, GPS, webcam integration, fun stuff like that). So I'm researching what will be easiest for me to just commit to for this, because I'm bored and got a Pi 3 B+ board for fun.
Answer:
If you have the time, dedication, and resources, there is really nothing wrong with mixing them together. But you just need to think about the time/cost/benefit of it. DIY to make the end user happy, even if you mix them. Doing it totally yourself is reinventing the wheel, but you can always pull in Bootstrap styles, etc.
The inherent risk if you use both is mixing them too much: then you will always have issues whenever you try to do version upgrades of either one.
I like a lot of Material UI's things, but I honestly dislike how a few things look (style-wise, by default). Functionally I like it better than Bootstrap, but at the same time, I do not like Material UI's React programming style (as a purist who hates CSS but knows how important it is, ever having to use !important is a big, big no-no) compared to the conventions my coworkers and I use. Not saying it's better or worse than my own preference, but a few things about it really irk me (even if they are done for good reasons).
Bootstrap gives you a lot of choices. I like how it looks better, and I like how it plays with ReactJS better, but there is reactstrap vs. react-bootstrap (which is why I found your post, trying to figure out which one to use for this demo thing I'm doing).
Most recently (for production projects) I do try to stick with just one, but normally I'm making systems that are function over form. So the users don't really care that much about the UI elements; it's about how to use it, not making it pretty. So I stick with a single one just to make my job easier, and I usually still override styles myself if the original styles piss me off. But I do not stick with one because "it's bad form to use both"; I stick with one for the reason mentioned above. I'd actually say that if bandwidth is not an issue, it's perfectly fine to use both, but only use the parts of them that you actually use.
(I noticed someone once importing all of jQuery when the only thing they used from it was $.ajax (albeit a lot, but still) ... I was like... is that not overkill?!) -- So if you use both and want to keep things slim, just make sure on compilation you're only importing what you're using. Pythonically speaking: never use import * from module (however you express that in JavaScript as a concept; webpack/gulp/whatever should take care of most of that for you), assuming you're using ES6/7-style JavaScript.
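To make that concrete, here is a small illustration; the module paths assume lodash and react-bootstrap purely as examples, so swap in whatever libraries you actually depend on.

// Prefer importing just the pieces you use...
import debounce from 'lodash/debounce';       // only the debounce helper
import Button from 'react-bootstrap/Button';  // only Bootstrap's Button component

// ...rather than pulling in a whole library for one function:
// import _ from 'lodash';
// import $ from 'jquery';   // overkill if all you ever call is $.ajax

export const delayedLog = debounce(msg => console.log(msg), 250);
export { Button };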
I encountered this scenario recently and opted to use both, but for specific tasks. With it being a responsive web app, Bootstrap made the most sense to use for the layout and Material UI for the widgets (just my personal preference).
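For example, the split might look roughly like this. The component names assume react-bootstrap and the v4-era @material-ui/core packages, with Bootstrap's CSS loaded elsewhere; treat it as a sketch, not a recommendation of specific versions.

// Bootstrap handles the responsive layout; Material UI supplies the widgets.
import React from 'react';
import { Container, Row, Col } from 'react-bootstrap';
import Button from '@material-ui/core/Button';
import TextField from '@material-ui/core/TextField';

export default function SearchPanel() {
  return (
    <Container>
      <Row>
        <Col xs={12} md={8}>
          <TextField label="Search" fullWidth />
        </Col>
        <Col xs={12} md={4}>
          <Button variant="contained" color="primary">Go</Button>
        </Col>
      </Row>
    </Container>
  );
}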
You can definitely use both but you should be aware of the following:
There will be a lot of overlap in offered features unless you take time and effort to manage it by carefully picking elements from each of those libraries. And even then you will face situations where you can't avoid overlap. This basically results in a bigger bundle.
You will have to maintain theming variables for both systems to have a consistent presentation across your app (a rough sketch of sharing theme values between the two follows below). Even then, there will be situations where your table checkboxes look different from your form checkboxes because they come from different libraries.
You have to learn and understand both of the systems. It means sometimes it'll be harder to find what's causing a certain bug. You'll also be spending more time deciding which library to use for which component.
Overall, it's more work for you to work with two different systems and a higher chance of things looking inconsistent. That said, mixing in things like a grid system with limited theming might not be too bad.
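If you do end up theming both, one way to limit the drift is to keep a single set of brand values and feed it to each library. A rough sketch, assuming the v4-era createMuiTheme API and a Sass build for Bootstrap (the names and colours are placeholders):

// theme.js -- single source of truth for brand colours, fed to both libraries.
import { createMuiTheme } from '@material-ui/core/styles';

export const brand = {
  primary: '#0d6efd',
  secondary: '#6c757d',
};

export const muiTheme = createMuiTheme({
  palette: {
    primary: { main: brand.primary },
    secondary: { main: brand.secondary },
  },
});

// For Bootstrap, the same values would go into a Sass override, for example:
//   $primary:   #0d6efd;
//   $secondary: #6c757d;
//   @import "~bootstrap/scss/bootstrap";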
If possible, I highly encourage you to choose one system and stick with it.
Using both will increase your production JS bundle size.
Material UI and Bootstrap both provide components with basic styles, like buttons, so choose one.
You can use Bootstrap's grid for structure only, or even just go with flexbox.
How can I store, for example, the body of a method in a database and later run it? (I'm using Delphi XE2; maybe RTTI would help.)
RTTI is not a full language interpreter. Delphi is a compiled language. You write it, compile it, and distribute only your binaries. Unless you're Embarcadero, you don't have rights to distribute DCC32 (the command line compiler).
However, the JVCL includes a Delphi-like language subset wrapped up in a very easy-to-use component called "JvInterpreter". You could write some code (as Pascal) and store it in a database. You could then "run that code" (interpreted, not compiled) that you pull from the database. Typically these should be procedures that call methods in your code. You have to write some "wrappers" that expose the compiled APIs you wish to make available to the interpreter (providing access to live data, database connection objects, or table/query objects). You're thinking that this sounds perfect, right? Well, it's a trap.
Beware of something called "the configuration complexity clock". You've just reached 9 o'clock, and that's where a lot of pain and suffering begins. Just like when you have a problem and you solve it with regular expressions, and "now you have two problems", adding scripting and DSLs to your app has a way of solving one problem and creating several others.
While I think the "DLL stored in a database blob field" idea is evil, and absurd, I think that wanton addition of scripting and domain-specific languages to applications is also asking for a lot of pain. Ask yourself first if some other simpler solution could work. Then apply the YAGNI principle (You Ain't Gonna Need It) and KISS (keep-it-simple-smartguy).
Think twice before you implement anything like what you're asking about doing in your question.
Your best Option, IMHO, is using a scripting engine and storing scripts in the database.
Alternatively you could put the code in a DLL and put that DLL in the database. There is code for loading a DLL from a resource into RAM and processing it so it can be used as if it had been loaded using LoadLibrary, e.g. in dzlib. I don't really know whether it works with every DLL and in which versions of Windows, but it does with the ones I tried.
I'm totally new to the world of programming and understand very little in terms of jargon and typical methodology.
A while ago I was writing some code, but accidentally deleted some good code while I was deleting bad code. From then on I started creating versions of my files: I would name each file with the date and a version number.
However, this is a pain in the ass, having to give a unique name to each file and then going to my core file and changing the reference to the name of the new file.
And then, just the other day, I accidentally overwrote something important even with this method, probably because of a typo in naming.
Needless to say, this method sucks.
I'm looking for suggestions on better practices and better tools. I've been looking at version control, but a lot of them, like Git and SVN, look really complicated. The idea is to speed up the whole versioning process, not make it harder by having to use the command line.
Right now I'm hoping that there's a tool that would save a unique version of the file every time I hit Ctrl-S, and give me one button to create a finalized version.
Of course if there are suggestions for totally different ways of doing things, that would be more awesome.
Thanks everyone.
There are two approaches to this problem:
Versioning on demand. This is the model used by subversion, CVS, etc., etc. When you have made a 'significant' change, you decide to tell the system "keep this version".
Automatic versioning. This is the model used by some old VAXen, Eclipse, IDEA, every wiki ever, and a few writers' tools. Every time you save, a new version is implicitly created. After some time, old versions may be culled (e.g., only one version is kept from work performed a week ago, rather than every save).
It sounds like you would prefer #2, because it is "fool-proof": you never have to go, "oops, I should have 'checked in' / 'kept' my work before making this change." You can always roll back. One downside is that you have to manually step through the old versions to find something, because unlike with #1, you generally don't give a description of each change.
Another downside is that for large files, or ones that are not easily diffed/patched (i.e. binary files), you will start burning through disk space pretty fast.
As an aside, it sounds like you don't need 90% of the features in a standard SCM system -- branching, labeling, etc. -- but you might find uses for them eventually. So learning one may be a win in the long run. You can do this with svn, etc. but it will take some customizing. If you use a scriptable editor (emacs, vi, TextMate, whatever) you could redefine the "Save" command as "Save and make a new version".
Subversion is more or less the gold standard.
I'd suggest (especially for a newbie) that you check out BeanStalk (www.Beanstalkapp.com) to run your subversion server and TortoiseSVN for your client.
Good luck!
Whatever you do, if someone mentions Visual SourceSafe -- run as fast as you can. VSS was created by Satan himself and handed down to torment developers the world over.
I think you're in a position where you have to get a little bit out of your comfort zone and take some time to learn git. It's pretty easy to learn and use.
Believe me, it's really worth it. Time spent learning git is time well spent.
If you are not working in a team, you could use something like Eclipse's local history feature. It stores versions of your files locally, and you can revert to previous versions whenever you feel like it. More details here: http://help.eclipse.org/ganymede/index.jsp (Search for "local history"). I am pretty sure other IDEs have such a feature too.
If you are collaborating with others on your code, there probably is no way around learning one of the standard tools like SVN, CVS or git. For most of them, there are plugins for many IDEs available, so you don't have to use the command line.
I currently use Subversion, but my source control experience is limited.
I would however suggest reading the tutorial by Eric Sink.
http://www.ericsink.com/scm/source_control.html
It's best to learn how to use an existing 'industry standard' versioning tool like Subversion. Even if you're new to programming and version control, SVN isn't that hard to learn and will serve you well. I personally use and recommend VisualSVN Server and TortoiseSVN for Windows. Both are free and quite simple to use.
For a system that creates a revision on every save, perhaps you should look into a Versioning File System.
I think TortoiseSVN would be a good Subversion client for you to try if you're in Windows. It won't do what you're looking for with every-time-I-save-I-get-a-new-version--you'll have to manually "commit" versions to the repository. When you do a commit, that creates a new version, essentially saving your progress at that point. TortoiseSVN is pretty user-friendly, and it's a GUI, so you won't be working at the command line. You'll be able to do things like right-click a file in Windows Explorer and choose Commit to save your progress. Plus, TortoiseSVN is free and open source.
Subversion is not really complicated. If you are using Windows, TortoiseSVN will help a lot; if you are using Eclipse, the Subclipse plug-in is awesome. (You probably should be using Eclipse regardless :) )
Some of the others are a bit complicated, but you just have to know the pattern with eclipse. Maybe you could "Try it out" with an open source project or some existing subversion server.
The cycle would be:
First you "Check out" a repository. This fills up your specified directory with the contents from the repository.
If you are doing it from the command line--it's "svn co"--there is enough help there to figure out the rest.
Second you edit your files. You don't have to lock them or anything.
If you add a new file, you use "svn add filename" as soon as you add it. This won't actually change the repository until you commit your changes.
When a group of edits are done, you check them in with "svn ci" (also svn commit works).
This one has a SLIGHT twist that you'll always forget: every commit needs a comment. You don't have to specify the files you are committing or anything, but you do need to be in the top level of your project (it will commit everything below your directory).
So the procedure here is, go to the "root" of your project tree and type:
svn ci -m "comment"
piece of cake.
Finally, IF someone else is checking stuff in, things get SLIGHTLY stranger. Before you commit, you should "update" and get their changes. "svn up" is all it takes, but it may warn you that there were merges. This only happens when both of you edited the same file, and 90% of the time the merges will go okay. The rest of the time, it will put little markers in your file telling you what you changed and what they changed. The "up" command will tell you which files it did this to. Go look at them and clean the file up before you check the file in.
Always test between "svn up" and "svn ci", you never know if their crappy changes busted your pristine code.
That's really it. It's so easy from the CLI, that the graphics environments are hardly worth it (but subclipse is really nice if you are in eclipse anyway because it will visually show you modified files that need to be checked in).
If you ever forget, svn's command line help is extremely terse and useful, tells you JUST what you need to know, and has help on all the sub-commands and options.
If you're looking for an easy-to-set-up version control system for Windows, I highly recommend TortoiseHg, an easy-to-use Mercurial frontend for Windows. You don't have to worry about setting up and keeping track of a repository separate from your files, but you always can do so if you'd like to. Mercurial is a great tool because it can grow with your needs. It has all the usual features like easy merging, etc. and is quite a bit easier to wrap your head around than Git in my experience.
I think Git is really easy to use especially when you use GitHub. They also provide lots of good guides to get up and running.
http://github.com
http://github.com/guides/home
I've used Git, SVN, CVS, and Perforce. On both Windows and Unix environments.
My vote is definitely for SVN, for its ease of use and flexibility. I prefer to use the command line now, but at one time I was using TortoiseSVN for Windows, which we were able to get non-technical people to use without a hitch.
Use SVN.
You're definitely on the right track with recognizing the need for version control, but you sound unsure about what that might mean for you and your work. Once you learn the concepts behind version control systems, you will really come to appreciate them.
The concepts are simple: a source code control system is a piece of software designed to help you store and manage your code. How you get code into and out of it differs based on which system you choose: one paradigm is that you deliberately "check out" a file, make your changes to it, test it and make sure it's good, then check it back in. Another is that you simply save every change you make, because disk space is dirt cheap, much cheaper than the time and effort you spent to create the source in the first place.
Another important concept is the "baseline" or "label". When your product is in a ready-to-ship state, you tell the source code control system to create a "label" and tag every current item in your entire source code base with that label. That way, when someone reports a bug in version 4.1, you can go to your system, request all the files with the "Version 4.1" label, and get exactly the source code they're having a problem with.
Having a source control tool integrated with your development environment makes the whole process much easier than having to mess with command lines. (Don't discount the command line because of its complexity; it delivers elegant control to an experienced user, and you will eventually become an experienced user.) But for now, I'd recommend a source code tool that can automate the process as much as possible.
Some things to consider: are you now sharing, or planning to share, the development with another developer? That might make a difference in how you want to set up a server. If you're developing alone on your own box, you can set it all up locally, but that's probably not the best approach for a team. (If you're unsure, git is very flexible in that arena.) Are you going to be storing large multimedia files, or just source code? Some source code systems are designed to efficiently store only text files, and will not handle movies, sounds, or image files very well.
Something else to know is that most newer source control systems require some kind of "daemon" program running on the server (Subversion, git, Perforce, Microsoft Team Foundation Server) while the older, simpler systems just use the file system directly (Visual Source Safe, cvs) and don't require a server program.
If you don't want to learn much and your demands are low, the simpler solutions should suffice. Microsoft's Visual Source Safe used to come with their Visual Studio products and was a very simple tool to use. It's not very robust, it's Microsoft-only, and it can't handle large files well, but it's very, very easy to set up and use. If you don't want to spend money, Subversion and git are two stellar open source solutions, and there is a lot of documentation for both on the web.
If you like to spend money, Perforce is considered an excellent choice for professional development teams (and I believe they have a free single-developer version.) If you really like to spend lots of money and want to make Bill Gates happy, Microsoft's Team Foundation Server is a complete software development lifecycle manager, is extremely easy to use in the Windows environment, and very powerful; but you'd probably want to devote an entire Windows server (plus SQL Server) instance to host it, and it will cost you several thousand dollars just on licenses. Unfortunately it is not the right tool for a one-man shop, or if you have no Windows admin experience.
If you have the budget or the connections, bringing in an experienced software engineer to help you get things started might be the quickest path to success. Otherwise, you'll have to do some more research to learn which systems best fit your situation.
Whatever VCS you use, if you choose versioning on demand instead of automatic versioning (to borrow terms from Alex's post), you will have to go through some ceremony to:
- create,
- rename,
- move,
- copy, or
- delete
a file that is under source control.
When you create a new file, you have to Add it to source control before you Commit your changes to the repository.
When you rename, move, copy, or delete a file under source control, do so with your VCS client. In TortoiseSVN and TortoiseGit, the move and copy operations are done with a right-click-and-drag, whereas the rename and delete operations are available via a right-click.
As you can imagine, changing things like the name of a project can be quite the hassle, hence the case for automatic versioning.
Ordinary file edits, and any changes to files not under source control, do not require you to tell your VCS client about them.
Finally, for one-man projects, I prefer git over SVN because SVN requires at least 2 copies of everything: a repository (the "master" copy of the files and history) and a working copy (the copy you do your work on). With git, the repository and working copy are the same thing, which makes my experience simpler.
We use SourceGear Vault, which has great integration with Visual Studio, and is free for a single user. Depending on what framework and languages you're using, though, Subversion is a great free solution.
First of all, please read these articles by Eric Sink. Eric Sink runs a company that makes a source control system called Vault. He explains in a newbie-friendly manner how to do source control, best practices, etc.:
Introduction to Source Control
I found it invaluable when I first wanted to understand Source Control.
SourceGear Vault is FREE for a single user. Its interface is intuitive and integrates well with Visual Studio.
If it's just you, you might want to try Bazaar. It's distributed like Git (so it'll be nice for a single person, with no server to deal with), but one of their main goals was to make it much easier to use than Git.
Also, there is a handy GUI tool that should make it amazingly easy to use, called TortoiseBzr. http://bazaar-vcs.org/TortoiseBzr
There is in fact such a tool. It is called emacs.
Just create yourself a "~/.emacs" file and put the following lines in it:
(setq version-control t)      ;; always make numbered backups
(setq kept-new-versions 5)    ;; keep the 5 newest backups of each file
(setq kept-old-versions 5)    ;; and the 5 oldest
(setq delete-old-versions t)  ;; quietly delete the backups in between
And then restart emacs.
This tells emacs to make numbered backups, keeping the 5 oldest and 5 newest versions of each file (and quietly deleting the ones in between). They will be kept in files named filename.~n~ where "filename" is your file's normal name, and "n" is the backup number.
If you develop your project alone (and don't need a server for collaboration), Mercurial might be your system of choice. I personally value one of its features: it only uses one place to save its information, the .hg directory in the root of your project. It doesn't put its data into every directory (like SVN does). This way the archive and the project directory are easy to manage.
I've used Visual Source Safe, Perforce, and Subversion. They were all fine, but I would have to say that the support and extensions for Subversion just seemed slightly better. If you're planning on entering/staying in the software industry, you MUST know the fundamentals of source control, and I would highly recommend setting up one of the source control systems. Subversion would be my recommendation and is free as well. It will be complicated at first, but you really should use an SVN client to add a GUI, to increase utility and cut down on all the complication you're observing.
A quick google of "dreamweaver svn" reveals that many people are working with Subversion in Dreamweaver. I'm an advocate of version control, and SVN in particular, so I would recommend you look into that :)
If you don't want to use a full-on version control system (as noted above), you may be able to improve your lot by refining and automating the procedure you described originally. Depending on your comfort with the tools, you should be able to put together a script in Dreamweaver itself or in Windows scripting (PowerShell, VBA, Perl, etc.) that will at least make date-named copies of the folder you are working in every so often. This will keep you from having to do it by hand and make sure there aren't any typo-related problems. Further down that path, you can have your script put a copy of your work on a backup drive or remote server, and then you'd have a backup, too.
I'm afraid I don't know much about Dreamweaver, but if it has much scripting support built in, you may even be able to "hook" into the Save / Auto-Save functions and have them do exactly what you want.
Hope this helps,
adricnet