How to delete all the data generated by the docker version of taosBenchmark? - tdengine

Uninstalling keeps the data, while dropping the database directly deletes the corresponding data. I want to make sure this is right.
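If so, dropping the benchmark database from inside the container is a minimal sketch of the cleanup. The container name `tdengine` and the database name `test` (taosBenchmark's usual default) are assumptions; substitute your own names:

```shell
# Drop the database taosBenchmark wrote into, from inside the container.
# "tdengine" (container name) and "test" (assumed default database name)
# are illustrative; adjust to your setup.
docker exec -it tdengine taos -s "DROP DATABASE IF EXISTS test;"
```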

Related

What is the correct way to update a desktop software after editing its database?

I have a desktop app that is in use by my clients. From time to time I fix some bugs, add features, and push new updates. An update works like this (automatically):
Download the new files for the app (a WinRAR archive).
Extract the files to a temp folder.
Copy the files to the app folder (overwrite the existing ones).
That's it. I don't touch the database, which is an MS Access file in its own folder. I just update the app files, not the database, because I don't want the clients to lose their data.
Now I've made some schema changes to the database (added some columns, tables...), so I have to update the database file this time as well.
How can I update these database files so the clients won't lose their data?
One option is defining the changes as a series of SQL commands, such as ALTER TABLE, to run against the database. Put these commands in a special file (or files) and build logic into your updater (or into your app's launch sequence) to detect and run them.
This has the additional benefit of allowing you to keep these changes with the version control system managing the source code for your app.
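As a minimal sketch of that idea: number the migration scripts and track the last applied version next to the database. Everything here is illustrative (directory layout, file names, the version file); a real updater would execute each script against the Access file, e.g. through ODBC, instead of just echoing it:

```shell
# Minimal sketch of a versioned-migrations runner. Assumptions: scripts
# live in ./migrations and are named 001.sql, 002.sql, ...; the last
# applied version is tracked in a plain text file next to the database.
MIGRATIONS_DIR=${MIGRATIONS_DIR:-migrations}
VERSION_FILE=${VERSION_FILE:-db_version}

current=$(cat "$VERSION_FILE" 2>/dev/null || echo 0)
for script in $(ls "$MIGRATIONS_DIR"/*.sql 2>/dev/null | sort); do
  n=$(basename "$script" .sql)      # numeric version taken from the file name
  if [ "$n" -gt "$current" ]; then
    echo "applying $script"         # real updater: run the SQL against the DB here
    current=$n
  fi
done
echo "$current" > "$VERSION_FILE"
```

Because the scripts are plain text, they version nicely alongside the app's source code, which is the benefit mentioned above.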

How do you specify a custom MacPorts distfiles mirror?

We have an isolated network with a mirror of MacPorts. I am trying to get a machine to properly reference this mirror for the various needs of port, and I have managed to get it to use our mirror for the base image and the packages directory (ala this post), but some packages don't have pre-built images in the packages directory so it tries to fetch the corresponding source package from distfiles. However, I haven't found any way to make it automatically use our mirror of distfiles. If I manually sync individual packages from our mirror to the local cache directory, that works, but I'm trying to make this work automatically and avoid the tedious process of syncing individual packages as needed. I'd also like to avoid having to sync the whole distfiles mirror locally.
I've tried to search for where the distfiles mirror list comes from to even try manually editing it, but I can't seem to find that, either.
Is there a proper way to do this?
If not, does anyone know what file I need to change to hack in our own URL?

What's the correct way to deal with databases in Git?

I am hosting a website on Heroku, and using an SQLite database with it.
The problem is that I want to be able to pull the database from the repository (mostly for backups), but whenever I commit & push changes to the repository, the database should never be altered. This is because the database on my local computer will probably have completely different (and irrelevant) data in it; it's a test database.
What's the best way to go about this? I have tried adding my database to the .gitignore file, but that results in the database being unversioned completely, disabling me to pull it when I need to.
While Git (like most other version control systems) supports tracking binary files such as databases, it works best with text files. In other words, you should never use a version control system to track constantly changing binary database files (unless they are created once and almost never change).
One popular method to still track databases in Git is to track text database dumps. For example, an SQLite database can be dumped into a *.sql file using the sqlite3 utility (the .dump subcommand). However, even when using dumps, it is only appropriate to track template databases which do not change very often, and to create the binary database from such dumps using scripts as part of standard deployment.
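A sketch of that dump/restore round trip with the sqlite3 command-line tool (all file names are illustrative):

```shell
# Create a tiny example database (names are illustrative).
sqlite3 app.db "CREATE TABLE notes(id INTEGER PRIMARY KEY, body TEXT);"
sqlite3 app.db "INSERT INTO notes(body) VALUES ('hello');"
# Dump to a text .sql file, which Git can diff and merge sensibly.
sqlite3 app.db .dump > app.sql
# A binary database can be rebuilt from the tracked dump at deploy time.
sqlite3 restored.db < app.sql
```

The app.sql file is what you would commit; the .db files stay out of version control.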
You could add a pre-commit hook to your local repository that will unstage any files you don't want to commit.
E.g. put the following in .git/hooks/pre-commit (and make the file executable):
git reset ./file/to/database.db
When working on your code (potentially modifying your database), you will at some point end up with:
$ git status --porcelain
M file/to/database.db
M src/foo.cc
$ git add .
$ git commit -m "fixing BUG in foo.cc"
M file/to/database.db
.
[master 12345] fixing BUG in foo.cc
1 file changed, 1 deletion(-)
$ git status --porcelain
M file/to/database.db
This way you can never accidentally commit changes made to your database.db.
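The hook can be exercised end to end in a throwaway repository; all paths and names below are illustrative and mirror the transcript above:

```shell
# Throwaway demo of the pre-commit hook unstaging the database file.
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name demo
mkdir -p file/to src
echo data > file/to/database.db
echo code > src/foo.cc
# The hook quietly unstages the database before every commit.
printf '#!/bin/sh\ngit reset -q ./file/to/database.db\n' > .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
git add .
git commit -q -m "fixing BUG in foo.cc"
git status --porcelain   # database.db is left uncommitted on disk
cd ..
```

After the commit, only src/foo.cc is in history; database.db remains in the working tree but untracked.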
Is it the schema of your database you're interested in versioning, while making sure you don't version the data within it?
I'd exclude your database from git (using the .gitignore file).
If you're using an ORM and migrations (e.g. Active Record) then your schema is already tracked in your code and can be recreated.
However if you're not then you may want to take a copy of your database, then save out the create statements and version them.
Heroku doesn't recommend using SQLite in production; they suggest their Postgres service instead, which lets you perform many tasks against the remote DB.
If you want to pull the live database from Heroku the instructions for Postgres backups might be helpful.
https://devcenter.heroku.com/articles/pgbackups
https://devcenter.heroku.com/articles/heroku-postgres-import-export

I'm using MongoDB with a custom folder for saving data, but that data is not synced with GitHub because the files don't change

I use GitHub to version my files and I would like to version my database too; in this case it is only for testing purposes.
But the database files created by MongoDB do not change; their modification dates are weeks old, so GitHub has stale data.
I can't really understand why, if I'm changing some data in the database, MongoDB doesn't save it to a file... or at least the file must have changed somehow.
MongoDB preallocates data files, which then get gradually filled. Perhaps that is why changes are not properly picked up.
As an aside, of all the possible ways of versioning a MongoDB database, I'm not sure that keeping the datadir itself in a Git repository is the best way to go.
Alternatives: running mongodump will result in a BSON-dump of your database or collection, while running mongoexport will result in a JSON or CSV. Both can be read back in with mongorestore and mongoimport, see documentation.
These dumps can then be versioned using your favourite tool. Personally, when using Git, I would version the JSON dump, e.g.
mongoexport --db mydatabase --collection mycollection > mycollection.json
will result in a JSON file, containing the contents of the chosen collection (you can dump the entire database if you want).
As something extra: if you append --csv and --fields fieldname1,fieldname2, you can dump a nice CSV file to read in with another program.
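Putting the pieces together, the workflow might look like the following. This requires a running mongod, and the database/collection names are illustrative:

```shell
# Export one collection to JSON and commit the dump instead of the datadir.
# "mydatabase" and "mycollection" are illustrative names.
mongoexport --db mydatabase --collection mycollection > mycollection.json
git add mycollection.json
git commit -m "snapshot mycollection"
# Later, rebuild the collection from the tracked dump (--drop replaces it):
mongoimport --db mydatabase --collection mycollection --drop mycollection.json
```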

Empty my Sqlite3 database in RoR

I am working on a Ruby on Rails 3 web application using sqlite3. I have been testing my application on-the-fly creating and destroying things in the Database, sometimes through the new and edit actions and sometimes through the Rails console.
I am interested in emptying my Database totally and having only the empty tables left. How can I achieve this? I am working with a team so I am interested in two answers:
1) How do I empty the database myself?
2) How can the others empty it (some of whom are using MySQL rather than sqlite3)? (We are all working on the same project through an SVN repository.)
To reset your database, you can run:
rake db:schema:load
Which will recreate your database from your schema.rb file (maintained by your migrations). This will additionally protect you from migrations that may later fail due to code changes.
Your dev database should be specific to your environment; if you need certain data, add it to your db/seeds.rb file. Don't share a dev database, as you'll quickly get into situations where other changes make your version incompatible.
Download DB Browser for SQLite here: http://sqlitebrowser.org/
Install it, run it, click Open Database (top left) and open locationOfYourRailsApp/db/development.sqlite3.
Then switch to the Browse Data tab, where you can delete or add data.
I found that deleting the development.sqlite3 file from the db folder and running rake db:migrate solves the problem for everyone on my team who is using sqlite3.
As far as I know there is no user/GRANT management in SQLite, so it is difficult to control access; you can only protect the database through file permissions.
If you want an empty database for test purposes, generate it once, copy the file somewhere safe, and use a fresh copy of that file just before each test run.
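A sketch of that template-copy approach with the sqlite3 CLI; the schema and paths are illustrative:

```shell
# Build the pristine template once (schema here is illustrative).
sqlite3 template.sqlite3 "CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT);"
# Before each test run, start from a fresh copy of the template.
mkdir -p db
cp template.sqlite3 db/test.sqlite3
```

Each run then begins with empty tables, regardless of what the previous run did.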
