TiddlyWiki is a great idea, brilliantly implemented. I'm using it as a portable personal "knowledge manager," and these are the prize virtues:
It travels on my USB flash memory stick and runs on any computer, regardless of operating system
No software installation is needed on the computer (TiddlyWiki merely uses the Internet browser)
No Internet connection is needed
In terms of data retrieval functionality, it mimics a relational database (use of tags and internal links)
Setup and configuration are so simple as to be almost zero, which also means dependencies are so minimal as to be nearly transparent
Let's say I've got a million words of prose in 4,000 tiddlers (posts). I'm still testing, but it looks like TiddlyWiki gets very slow.
Is there an app like TiddlyWiki that keeps all the virtues I listed above, and allows more storage? (or rather, retrieval!)
NOTE: Separation of content and presentation would be ideal. It's nifty that TiddlyWiki has everything in a single HTML document, but it's unhelpful in many ways. I don't care if a directory of assorted docs is needed (SQLite, XML?), as long as it's functionally self-contained.
After some time and serious consideration, I will post my own answer.
There is nothing that matches TiddlyWiki.
As for voluminous information, TW can pretty much handle it. (My early discouragements were due to malformed code.) Difficulty accessing information through the interface becomes an issue before any speed problems. This isn't to fault the interface -- it could be more powerful, but that would sacrifice lightness.
Indeed, TiddlyWiki can work with VERY large tiddler stores; they don't need to be in the current TiddlyWiki document either.
See "import tiddler" and friends over at http://tiddlytools.com
Before creating Rails, David Heinemeier Hansson wrote a wiki app called Instiki. Like TiddlyWiki, you don't run it from a separately running server*, so it's easy to run locally and move around on a USB drive (exporting the entire content to a zip file with all the html files or all the files in Textile markup). The entire Instiki tgz download is less than 5mb and the app has only one external dependency: Ruby.
So you can run Instiki anywhere you can run Ruby (for instance, on a Nokia N900 phone).
I never built any Instiki sites as large as you describe, but it ought to handle 1 million words in 4,000 pages far more easily than TiddlyWiki handles 4,000 tiddlers.
Roger_S
* Oh, not to confuse anyone: Instiki uses the embedded webserver WEBrick
You could try installing Portable Apps on your USB drive and adding the XAMPP package, which includes Apache, PHP, and MySQL, then running MediaWiki or other wiki software on top of it.
http://tiddlyweb.peermore.com/wiki/ may be exactly what you are looking for.
You can use any TiddlyWiki variant and the data can be delivered via a server and on-demand.
I have recently discovered DokuWikiStick which runs a version of MicroApache. Recommended by LifeHacker... Starting size is about 10MB.
You probably already know this, but there's a new version of TiddlyWiki out that is still in beta but has been rewritten to provide a more robust environment for the future.
http://tiddlywiki.com/
A 2020 answer (the tool is from 2017)
Check out liddly, a local TiddlyWiki server written in Go that fits all your requirements and can run off a USB drive. It stores tiddlers in a SQLite database (albeit without relational links), making the TiddlyWiki interface (presentation) separate from your data (content). It was last updated in 2017, but it still works with the latest TiddlyWiki 5; you will just have to compile it yourself.
We are building a webapp which is shipped to several clients as a Debian package. Each client runs their own server, but updates and support are handled by us.
We make regular releases of the product, with a clean version number. Most of the users get an automatic update (via Puppet); some others don't.
We want to keep track of the version of the application (in order to allow the user to check the version in an "about" section, and for our support to help the user more accurately).
We plan to store the version of the code and the version of the database schema in our database, and to keep the info up to date automatically.
Is that a good idea?
The other alternative we see is a file.
EDIT: The code and database schema are updated together (if we update to version x.y.z, both code and database go to x.y.z).
Using a table to track every change to a schema as described in this post is a good practice that I'd definitely suggest following.
For the application, if it is shipped independently of the database (which is not clear to me), I'd embed a file in the package (and thus not use the database to store the version of the web application).
If not, i.e. if both the application and database versions are maintained in sync, then I'd just use the information stored in the database.
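For example, a minimal tracking table might look like this (a sketch only; table and column names are hypothetical):

CREATE TABLE schema_version (
    version     VARCHAR(20)  NOT NULL,   -- e.g. '1.4.2'
    applied_at  TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,
    description VARCHAR(255)             -- what this upgrade did
);

-- Each upgrade script appends one row:
INSERT INTO schema_version (version, description)
VALUES ('1.4.2', 'added index on orders.customer_id');

-- The "about" page and support read the latest row:
SELECT version FROM schema_version ORDER BY applied_at DESC LIMIT 1;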
As a general rule, I would have both the DB version and the application version. The problem here is how "private" the database is. If the database is "private" to the application and the user never modifies the schema, then your initial solution is fine. In my experience, databases which accumulate several years of data stop being private: users add a table or two and access the data with some reporting tool, and from that point on the database is no longer used exclusively by the application.
UPDATE
One more thing to consider is users (or the application) not being able to connect to the DB and calling for support. For this case it would be better to have the version, etc., stored on the file system.
Assuming there are no compelling reasons to go with one approach or the other, I think I'd go with keeping them in the database.
I'd put them in both places. Then, when running your about function, you quickly check that they are both the same; if they aren't, you can display extra information about the version mismatch. If they're the same, you only need to display one of them.
I've generally found users can do "clever" things like revert databases back to old versions by manually copying directories around "because they can" so defensively dealing with it is always a good idea.
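A minimal sketch of that defensive check, assuming a Python webapp, hypothetical file and database paths, and a schema_version table like the one sketched in the earlier answer:

import sqlite3  # assumption: SQLite; any DB-API driver works the same way

def get_versions():
    # The version the package manager installed (hypothetical path)
    with open("/etc/myapp/version") as f:
        file_version = f.read().strip()
    # The version the last schema migration recorded (hypothetical path)
    db = sqlite3.connect("/var/lib/myapp/app.db")
    row = db.execute("SELECT version FROM schema_version "
                     "ORDER BY applied_at DESC LIMIT 1").fetchone()
    db_version = row[0] if row else "unknown"
    db.close()
    return file_version, db_version

def about_text():
    file_version, db_version = get_versions()
    if file_version == db_version:
        return "Version %s" % file_version
    # A mismatch points support at a reverted or half-updated database
    return "Code version %s / database version %s (MISMATCH)" % (
        file_version, db_version)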
I created a database for tracking metrics, with some automation tricks (email, .doc, .ppt presentations, etc.), with a very large main table and lots of forms/GUI. This is the first time I have ever worried about an MDE/front end for the thing. So if you would be so kind as to answer a few questions, or offer any advice, it would be greatly appreciated (I would hate for all this work not to be utilized).
What is the first thing I need to do? Is it the 2000 version that must be converted to 2003 to create the MDE, and does that get done before I use the Database Splitter?
Will the number of objects in the database affect the ability to do this? I have something like 80 forms, 70 queries, 20+ macros, 12 tables, etc. Does the number of objects prevent some of this from working well once the front end is there?
When I split the database, can I continue to work and make changes on the "back end", and will those changes directly affect the front end?
These may be basic questions, but I don't know the answers, so... thanks!
Here is my 2¢.
Question 1 - I have never used the Database Splitter, as I feel I have more control doing it manually. If you do it manually, you can do it in a version that does not have a Database Splitter. But if you do use the splitter, then yes, you will have to upgrade to a version that has one before doing it.
To do it manually here are the steps.
Backup everything.
Create a copy of your file into the same directory. So if you have an MyApp.MDB create a copy into the same directory with a new name, such as MyAppDATA.mdb.
Open the new DATA file (MyAppDATA.mdb) and delete all of the objects EXCEPT the TABLES.
Open the App file (MyApp.mdb) and delete all of the tables.
Also in MyApp.mdb...go to the File/Get External Data/Link Tables menu to link the tables in MyAppDATA.mdb to MyApp.mdb. Select All and create the links.
That should do it. And if you screw up you made a backup...right?
A couple of tips and gotchas: be sure that you go to Tools/Options and that you are NOT showing System and Hidden tables. You just don't want to delete system tables from MyApp. Another way to do it is to NOT delete tables that start with MSys or USys.
Question 2 - It does not matter how many objects you have. In fact, you don't have that many objects anyway.
Question 3 - Yes. You will make back-end changes in MyAppDATA.mdb, and when you open MyApp.mdb those changes will auto-magically be there to see and query against, etc. (In the query designer you may need to save/close/reopen to see new fields if you made the modification while in the query.) The EXCEPTION to that is new tables: you will have to use the File/Get External Data/Link Tables option to create links to them.
One thing to remember (and I hope you already realize this) is that the one downside of splitting the database is that when you deploy the front-end file, the relative path to the data will usually vary from machine to machine, and there is no automatic re-linking of tables in Access. If your target clients have the full version of Access, you can always use Tools/Database Utilities/Linked Table Manager to refresh the links to the right location. If you can't do that, then you will have to do one of the following:
1. Write code that does the automatic re-linking for you (see the sketch after this list). Basically it will check the links; if they are invalid, it will prompt the user for the data location (or look it up in an INI file) and re-link the tables.
2. Always deploy your app to the same location on all machines. If you have commercial visions for your application this won't work...I mention it for academic reasons. It might be doable for a limited deployment where you have a lot of control over file placement on each machine.
3. Put the data file (MyAppDATA.mdb) onto a network share and link the tables across the network using a drive mapping or a UNC path (\\myserver\mydata\ApplicationData\MyAppDATA.mdb). The latter is preferred, but both of them run the same risks as number two.
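Here is a minimal sketch of the re-linking code from option 1, assuming DAO under Access 2003 (the routine name is hypothetical, and how you obtain the new path, via prompt or INI file, is up to you):

' Re-link every attached table to a new back-end location.
' strDataPath might be "\\myserver\mydata\ApplicationData\MyAppDATA.mdb"
Public Sub RelinkTables(strDataPath As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    For Each tdf In db.TableDefs
        ' Only linked tables have a Connect string
        If Len(tdf.Connect) > 0 Then
            tdf.Connect = ";DATABASE=" & strDataPath
            tdf.RefreshLink
        End If
    Next tdf
End Sub

You would call something like this from your startup form after detecting a broken link (for example, by trapping the error raised when opening a linked table).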
Seth
PS This answer assumes Access 2003.
PPS If you have commercial visions for your application then the table linking has got to be REALLY robust.
PPPS I agree with the commenter that you may want to take the plunge and do SQL if it is in your skill set.
One thing that hasn't been discussed is whether the compile to MDE could fail. Basically, if your code compiles in your front-end MDB, it will convert to an MDE. But I've noticed that lots of people never compile.
Some hints for keeping your VBA code in good shape:
In VBE options, turn off COMPILE ON DEMAND.
Add the COMPILE button to your standard VBE toolbar and USE IT OFTEN.
Periodically, back up your MDB and decompile/recompile it.
Also, remember that you must keep the MDB source, as the VBA code is not editable in an MDE and not recoverable by any good method.
EDIT:
Steps for a decompile:
Back up your MDB.
Start an instance of Access with the /decompile command-line argument. For instance, I have a shortcut on my desktop that has this as the target:
"C:\Program Files\Microsoft Office\OFFICE11\MSACCESS.EXE" /decompile
Having opened that instance of Access, open the MDB you want to decompile. You will see nothing happen. DO NOTHING FURTHER IN THIS INSTANCE OF ACCESS; close it. (The reason: Michael Kaplan, who knows a thing or two about this, recommended that you never do any work in an Access instance opened with the /decompile switch, because there is no guarantee that the Access application code executes fully safely under those circumstances.)
Open the just-decompiled MDB while holding down the shift key (you want to be sure that startup routines don't run, because they would likely recompile the project before you've finished your cleanup), and compact the MDB (holding down the shift key again).
Open the code editor and compile the project (DEBUG -> COMPILE [db name], for those who haven't done step #2 of my original compiling instructions at the top of the post, before the edit).
Compact the MDB (it doesn't matter if you bypass startup, since it's already fully compiled).
Why so many steps?
Because the purpose of the decompile is to get rid of the compiled p-code and start afresh from the canonical VBA source. Following the steps above ensures that you have completely cleared the data pages storing the compiled code before you recompile. Without the compact step after the decompile, under some very rare circumstances the code can behave strangely. I can't imagine that the old discarded p-code is being used again, but there's something about the pointers between the canonical code and the compiled code that apparently doesn't get completely flushed by a decompile without a compact.
This would be a comment to Seth's answer, but my rep isn't high enough to comment yet.
Seth did a great job answering your questions, I just wanted to add a bit more to part #1 about using the Database Splitter. The Database Splitter in the Tools menu works fine. Doing it manually is alright too, but it's a whole lot faster and easier to use the Database Splitter. I've used it a dozen times and never encountered any issues after using it.
http://www.databasedev.co.uk/split_a_database.html has a decent page about some of the pros and cons of splitting your database.
http://www.accessmvp.com/TWickerath/articles/multiuser.htm also has some good info when dealing with a split database in a multi-user environment.
Seth gave you a very good answer. But I'll add a few comments.
The number of objects only becomes relevant when you get close to about 1,000 forms, reports, and modules which have code; there's a limit around there. If you do get that message when trying to make an MDE, then you almost certainly have a code error and need to compile to find it.
Another resource is "Splitting your app into a front end and back end Tips"
See the Auto FE Updater downloads page to make the process of distributing new FEs relatively painless. The utility also supports Terminal Server/Citrix quite nicely.
I'm totally new to the world of programming and understand very little in terms of jargon and typical methodology.
A while ago I was writing some code, and I accidentally deleted some good code while deleting bad code. From then on I started creating versions of my files: I would name each file with the date and a version number.
However, this is a pain in the ass: having to give a unique name to each file and then going to my core file and changing the reference to the name of the new file.
And then, just the other day, I accidentally overwrote something important even with this method, probably because of a typo in naming.
Needless to say, this method sucks.
I'm looking for suggestions on better practices and better tools. I've been looking at version control, but a lot of them (git, svn) look really complicated. The idea is to speed up the whole versioning process, not make it harder by having to work at the command line.
Right now I'm hoping that there's a tool that would save a unique version of the file every time I hit ctrl-s, and give me one button to create a finalized version.
Of course if there are suggestions for totally different ways of doing things, that would be more awesome.
Thanks everyone.
There are two approaches to this problem:
Versioning on demand. This is the model used by Subversion, CVS, etc. When you have made a 'significant' change, you decide to tell the system "keep this version".
Automatic versioning. This is the model used by some old VAXen, Eclipse, IDEA, every wiki ever, and a few writer's tools. Every time you save, a new version is implicitly created. At some remove, old versions may be culled (e.g., only one version is kept from work performed a week ago, rather than every save).
It sounds like you would prefer #2, because it is "fool-proof" -- you never have to go, "oops, I should have 'checked in' / 'kept' my work before making this change." You can always roll back. One downside is that you have to manually step through the old versions to find something, because unlike with #1 you generally are not giving a description of each change.
Another downside is that for large files, or ones that are not easily diffed/patched (i.e. binary files), you will start burning through disk space pretty fast.
As an aside, it sounds like you don't need 90% of the features in a standard SCM system -- branching, labeling, etc. -- but you might find uses for them eventually. So learning one may be a win in the long run. You can do this with svn, etc. but it will take some customizing. If you use a scriptable editor (emacs, vi, TextMate, whatever) you could redefine the "Save" command as "Save and make a new version".
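In emacs, for instance, that redefinition can be as small as a save hook. This is a sketch only: it assumes the file is already under svn control, and committing on every save gets noisy fast.

(add-hook 'after-save-hook
          (lambda ()
            (when (buffer-file-name)
              ;; commit just the saved file, with a canned message
              (shell-command
               (format "svn ci -m autosave %s"
                       (shell-quote-argument (buffer-file-name)))))))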
Subversion is more or less the gold standard.
I'd suggest (especially for a newbie) that you check out BeanStalk (www.Beanstalkapp.com) to run your subversion server and TortoiseSVN for your client.
Good luck!
Whatever you do, if someone mentions Visual SourceSafe -- run as fast as you can. VSS was created by Satan himself and handed down to torment developers the world over.
I think you're in a position where you have to get a little bit out of your comfort zone and take some time to learn git. It's pretty easy to learn and use.
Believe me, it's really worth it. Time spent learning git is time well spent.
If you are not working in a team, you could use something like Eclipse's local history feature. It stores versions of your files locally, and you can revert to previous versions whenever you feel like it. More details here: http://help.eclipse.org/ganymede/index.jsp (Search for "local history"). I am pretty sure other IDEs have such a feature too.
If you are collaborating with others on your code, there probably is no way around learning one of the standard tools like SVN, CVS or git. For most of them, there are plugins for many IDEs available, so you don't have to use the command line.
I currently use Subversion, but my source control experience is limited.
I would however suggest reading the tutorial by Eric Sink.
http://www.ericsink.com/scm/source_control.html
It's best to learn how to use an existing, industry-standard versioning tool like Subversion. Even if you're new to programming and version control, SVN isn't that hard to learn and will serve you well. I personally use and recommend VisualSVN Server and TortoiseSVN for Windows. Both are free and quite simple to use.
For a system that creates a revision on every save, perhaps you should look into a Versioning File System.
I think TortoiseSVN would be a good Subversion client for you to try if you're in Windows. It won't do what you're looking for with every-time-I-save-I-get-a-new-version--you'll have to manually "commit" versions to the repository. When you do a commit, that creates a new version, essentially saving your progress at that point. TortoiseSVN is pretty user-friendly, and it's a GUI, so you won't be working at the command line. You'll be able to do things like right-click a file in Windows Explorer and choose Commit to save your progress. Plus, TortoiseSVN is free and open source.
Subversion is not really complicated. If you are using Windows, TortoiseSVN will help a lot; if you are using Eclipse, the Subclipse plug-in is awesome. (You probably should be using Eclipse regardless :) )
Some of the others are a bit complicated, but you just have to know the pattern with eclipse. Maybe you could "Try it out" with an open source project or some existing subversion server.
The cycle would be:
First you "Check out" a repository. This fills up your specified directory with the contents from the repository.
If you are doing it from the command line, it's "svn co"; there is enough help there to figure out the rest.
Second you edit your files. You don't have to lock them or anything.
If you add a new file, use "svn add filename" as soon as you add it. This won't actually change the repository until you commit your changes.
When a group of edits is done, you check them in with "svn ci" ("svn commit" also works).
This one has a SLIGHT twist that you'll always forget: every commit needs a comment. You don't have to specify the files you are committing or anything, but you do need to be in the top level of your project (it will commit everything below your directory).
So the procedure here is, go to the "root" of your project tree and type:
svn ci -m "comment"
piece of cake.
Finally, IF someone else is checking stuff in, things get SLIGHTLY stranger. Before you commit, you should "update" and get their changes. "svn up" is all it takes, but it may warn you that there were merges. This only happens when both of you edited the same file, and 90% of the time the merges will go okay. The rest of the time, it will put little markers in your file telling you what you changed and what they changed. The "up" command will tell you which files it did this to; go look at them and clean the file up before you check it in.
Always test between "svn up" and "svn ci"; you never know if their crappy changes busted your pristine code.
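So the whole cycle, from the command line, looks like this (the repository URL is just an example):

svn co http://svn.example.com/myproject/trunk myproject
cd myproject
# ...edit files...
svn add newfile.txt        # only needed for brand-new files
svn up                     # merge in everyone else's changes
# ...run your tests...
svn ci -m "describe what you changed"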
That's really it. It's so easy from the CLI that the graphical environments are hardly worth it (but Subclipse is really nice if you are in Eclipse anyway, because it will visually show you modified files that need to be checked in).
If you ever forget, svn's command-line help is extremely terse and useful; it tells you JUST what you need to know and has help on all the sub-commands and options.
If you're looking for an easy-to-set-up version control system for Windows, I highly recommend TortoiseHg, an easy-to-use Mercurial frontend for Windows. You don't have to worry about setting up and keeping track of a repository separate from your files, but you always can do so if you'd like to. Mercurial is a great tool because it can grow with your needs. It has all the usual features like easy merging, etc. and is quite a bit easier to wrap your head around than Git in my experience.
I think Git is really easy to use especially when you use GitHub. They also provide lots of good guides to get up and running.
http://github.com
http://github.com/guides/home
I've used Git, SVN, CVS, and Perforce. On both Windows and Unix environments.
My vote is definitely for SVN, for its ease of use and flexibility. I prefer to use the command line now, but at one time I was using TortoiseSVN for Windows, which we were able to get non-technical people to use without a hitch.
Use SVN.
You're definitely on the right track in recognizing the need for version control, but you sound unsure about what it might mean for you and your work. Once you learn the concepts behind version control systems, you will really come to appreciate them.
The concepts are simple: a source code control system is a piece of software designed to help you store and manage your code. How you get code into and out of it differs based on which system you choose: one paradigm is that you deliberately "check out" a file, make your changes to it, test it and make sure it's good, then check it back in. Another is that you simply save every change you make, because disk space is dirt cheap, much cheaper than the time and effort you spent to create the source in the first place.
Another important concept is the "baseline" or "label". When your product is in a ready-to-ship state, you tell the source code control system to create a "label" and tag every current item in your entire source code base with it. That way, when someone reports a bug in version 4.1, you can go to your system, request all the files with the "Version 4.1" label, and get exactly the source code they're having a problem with.
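In Subversion, for example, a label ("tag" in svn-speak) is just a cheap server-side copy into a tags directory; the paths here are hypothetical:

svn copy http://svn.example.com/myproject/trunk \
         http://svn.example.com/myproject/tags/4.1 \
         -m "Tagging version 4.1"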
Having a source control tool integrated with your development environment makes the whole process much easier than having to mess with command lines. (Don't discount the command line because of its complexity; it delivers elegant control to an experienced user, and you will eventually become an experienced user.) But for now, I'd recommend a source control tool that can automate the process as much as possible.
Some things to consider: are you now sharing, or planning to share, development with another developer? That might make a difference in how you want to set up a server. If you're developing alone on your own box, you can set it all up locally, but that's probably not the best approach for a team. (If you're unsure, git is very flexible in that arena.) Are you going to be storing large multimedia files, or just source code? Some source control systems are designed to efficiently store only text files and will not handle movies, sounds, or image files very well.
Something else to know is that most newer source control systems require some kind of "daemon" program running on the server (Subversion, git, Perforce, Microsoft Team Foundation Server) while the older, simpler systems just use the file system directly (Visual Source Safe, cvs) and don't require a server program.
If you don't want to learn much and your demands are low, the simpler solutions should suffice. Microsoft's Visual SourceSafe used to come with their Visual Studio products and was a very simple tool to use. It's not very robust, it's Microsoft-only, and it can't handle large files well, but it's very, very easy to set up and use. If you don't want to spend money, Subversion and git are two stellar open-source solutions, and there is a lot of documentation for both on the web.
If you like to spend money, Perforce is considered an excellent choice for professional development teams (and I believe they have a free single-developer version). If you really like to spend lots of money and want to make Bill Gates happy, Microsoft's Team Foundation Server is a complete software development lifecycle manager; it is extremely easy to use in the Windows environment and very powerful, but you'd probably want to devote an entire Windows Server instance (plus SQL Server) to hosting it, and it will cost you several thousand dollars in licenses alone. Unfortunately it is not the right tool for a one-man shop, or if you have no Windows admin experience.
If you have the budget or the connections, bringing in an experienced software engineer to help you get things started might be the quickest path to success. Otherwise, you'll have to do some more research to learn which systems best fit your situation.
Whatever VCS you use, if you choose versioning on demand instead of automatic versioning (to borrow terms from Alex's post), you will have to go through some ceremony to create, rename, move, copy, or delete a file that is under source control.
When you create a new file, you have to Add it to source control before you Commit your changes to the repository.
When you rename, move, copy, or delete a file under source control, do so with your VCS client. In TortoiseSVN and TortoiseGit, the move and copy operations are done with a right-click-and-drag, whereas the rename and delete operations are available via a right-click.
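With Subversion's command-line client, for example, that ceremony looks like this:

svn add newfile.txt
svn mv oldname.txt newname.txt
svn cp original.txt copy-of-original.txt
svn rm obsolete.txt
svn ci -m "added, renamed, copied, and deleted some files"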
As you can imagine, changing things like the name of a project can be quite the hassle, hence the case for automatic versioning.
Ordinary file edits, and any changes to files not under source control, do not require you to tell your VCS client about them.
Finally, for one-man projects, I prefer git over SVN because SVN requires at least 2 copies of everything: a repository (the "master" copy of the files and history) and a working copy (the copy you do your work on). With git, the repository and working copy are the same thing, which makes my experience simpler.
We use SourceGear Vault, which has great integration with Visual Studio, and is free for a single user. Depending on what framework and languages you're using, though, Subversion is a great free solution.
First of all, please read these articles by Eric Sink. Eric Sink runs a company that creates a source control system called Vault. He explains in a newbie-friendly manner how to do source control, best practices, etc.:
Introduction to Source Control
I found it invaluable when I first wanted to understand Source Control.
SourceGear Vault is FREE for a single user. Its interface is intuitive and integrates well with Visual Studio.
If it's just you, you might want to try Bazaar. It's distributed like Git (so it'll be nice for a single person: no server to deal with), but one of their main goals was to make it much easier to use than Git.
Also, there is a handy GUI tool that should make it amazingly easy to use, called TortoiseBzr. http://bazaar-vcs.org/TortoiseBzr
There is in fact such a tool. It is called emacs.
Just create yourself a "~/.emacs" file and put the following lines in it:
(setq version-control t)     ; always make numbered backups
(setq kept-new-versions 5)
(setq kept-old-versions 5)
(setq delete-old-versions t) ; delete excess middle versions without asking
And then restart emacs.
This tells emacs to keep the 5 oldest and 5 newest backup versions of each file. They will be kept in files named filename.~n~, where "filename" is your file's normal name and "n" is the backup number.
If you develop your project alone (and don't need a server for collaboration), Mercurial might be your system of choice. I personally value one of its features: it uses only one place to save its information, the .hg directory in the root of your project. It doesn't put its data into every directory (like SVN does). This way the archive and the project directory are easy to manage.
I've used Visual SourceSafe, Perforce, and Subversion. They were all fine, but I would have to say that the support and extensions for Subversion just seemed slightly better. If you're planning on entering or staying in the software industry, you MUST know the fundamentals of source control, and I would highly recommend setting up one of these source control systems. Subversion would be my recommendation and is free as well. It will be complicated at first, but you really should use an SVN client with a GUI to increase utility and cut down on all the complication you're observing.
A quick google of "dreamweaver svn" reveals that many people are working with Subversion in Dreamweaver. I'm an advocate of version control, and SVN in particular, so I would recommend you look into that :)
If you don't want to use a full-on version control system (as noted above), you may be able to improve your lot by refining and automating the procedure you described originally. Depending on your comfort with the tools, you should be able to put together a script in Dreamweaver itself or in Windows scripting (PowerShell, VBA, Perl, etc.) that will at least make date-named copies of the folder you are working in every so often. This will keep you from having to do it by hand and make sure there aren't any typo-related problems. Further down that path, you can have your script put a copy of your work on a backup drive or remote server, and then you'd have a backup, too.
I'm afraid I don't know much about Dreamweaver, but if it has much scripting support built in, you may even be able to "hook" into the Save/Auto-Save functions and have them do exactly what you want.
Hope this helps,
adricnet