I wanted to know what the difference is between:
https://plantuml.com/download (plantuml.jar)
local PlantUML server (http://localhost:8080)
I have been using a local PlantUML server to generate .png files from .puml with:
cat ./pilot.puml | curl -v -H "Content-Type: text/plain" --data-binary @- http://localhost:8080/plantuml/png/ --output - > out.png
However, I need a separate image file for each .puml so I can reference them in my README.md.
I know that with plantuml.jar one can generate multiple .png files at once, but I am afraid it might upload my data to the public PlantUML server.
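Roughly, what I am after is one output file per input, something like this untested sketch against my local server:
# untested sketch: one .png per .puml, using the same local server endpoint as above
for f in ./*.puml; do
  curl -s -H "Content-Type: text/plain" --data-binary @"$f" \
    http://localhost:8080/plantuml/png/ --output "${f%.puml}.png"
done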
The local server and the jar do the same thing: they generate images from the text-based diagram description.
The difference is the frontend.
The server gives you a web-based UI and a web-based API; the jar gives you a command-line interface.
The generation happens locally in both cases.
The jar has much better (and less secure) access to your disk. That is useful if you want to generate multiple diagrams in the same directory, or if your include paths change frequently (if you use !include).
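For example, a single local run like this (just a sketch; paths are placeholders, and I believe the quoted "**" pattern recurses into subdirectories) renders every diagram without touching the public server:
# sketch: render all .puml files to .png locally with the jar (no network involved)
java -jar plantuml.jar -tpng ./diagrams/*.puml
# or, recursively over a directory tree
java -jar plantuml.jar -tpng "./diagrams/**.puml"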
Also, if you use GitHub, you can try Mermaid.
And if you use GitLab, or have your own git repository with more control over it, you can try AsciiDoc with diagrams. Both of them allow you to embed your diagrams in your README (AsciiDoc even lets you include external .puml files).
I have a python script that fetches data twice a day from a server of mine. The script returns around 40 JSON files containing various data. The files aren't particularly big and the combined size of all the files is around 250KB.
Alongside my script I am developing a dashboard in React that renders the data from each file into a table, allowing me a visual representation of the data.
I have been looking at what would be the best way to store these files, something that allows me to upload and fetch them twice a day.
Someone mentioned using MongoDB to store the files, but after some research I feel like Mongo is better suited to storing the contents of a file rather than the file itself. I tried to develop a solution, but I couldn't figure out how it could be done when each object is stored as a document with no clear way (to me) to tell which document came from which file.
Other options I have considered are:
Storing the files on the server that is hosting my React project and rendering them locally as I am doing now during development
Storing the files using a provider such as AWS/Firebase
Storing them in a different database (I see SQL databases now support storing JSON)
Are there any other solutions that you think would work best for this scenario? If so, why?
Hello,
Consider using an FTP server.
We have clients that send us data every 10 minutes via FTP as XML files, and I have a Node.js back-end that reads these files.
You can use the same approach in your scenario with JSON files.
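As a sketch (the host, credentials, and paths are placeholders), the upload side can be as small as:
# sketch: push each JSON file to an FTP drop directory (placeholder host and credentials)
for f in ./data/*.json; do
  curl -T "$f" --user "user:password" "ftp://ftp.example.com/incoming/"
done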
I've been writing an app to deploy on Heroku. So far everything has been working great. However, there's one last step that I haven't been able to solve: I need to generate a CSV file on-the-fly from the DB and let the user download the file.
On my local machine I can simply write the file to a folder under the web app root, e.g. /output, and then redirect my browser to http://localhost:4000/output/1.csv to get the file.
However, the same code fails again and again on Heroku no matter how I try to modify it. It either complains about not being able to write the file or not being able to redirect the browser to the correct path.
Even if I manually use heroku run bash, create an /output folder in the project root and create a file there, when I try to direct my browser to it (e.g. https://myapp.herokuapp.com/output/1.csv), it simply says "Page not found".
Is it simply impossible to perform such an action on Heroku after all? I thought since we are free to create files on the ephemeral file system we should be able to access it from the browser as well, but things seem to be more complicated than that.
I'm using Phoenix framework and I already added
plug Plug.Static,
at: "/output", from: Path.expand("output/"), gzip: false
to my endpoint.ex. Apparently it works on localhost but not on Heroku?
I'll turn to Amazon S3 if indeed what I'm trying to do is impossible. However, I want to avoid using S3 as much as possible since this should be a really simple task and I don't want to add another set of credentials/extra complexity to manage. Or is there any other way to achieve what I'm trying to do without having to write the file to the file system first and then redirecting the user to it?
I know it doesn't strictly answer your question, but if you don't mind generating the CSV every time it is requested, you can use Controller.send_download/3 and serve an arbitrary payload as a download (in your case, the contents of the CSV).
Naturally you could store the generated CSVs somewhere (like the database or even ets) and generate them "lazily".
rpm(1) provides a -V option to verify installed files against the installation database, which can be used to detect modified or missing files.
This might be used as a form of intrusion detection (or at least as part of an audit). However, it is of course possible that the installed rpm database has been modified by a hacker to hide their tracks (see http://www.sans.org/security-resources/idfaq/rpm.php, last sentence).
It looks like it should be possible to back up the rpm database /var/lib/rpm after every install (to some external medium) and to use that backup during an audit via --dbpath. Such a backup would of course have to be updated after every install or upgrade.
Is this feasible? Are there any resources that detail methods, pitfalls, suggestions etc for this?
Yes, it's feasible. Use "rpm -Va --dbpath /some/where/else" to point at a saved database directory.
Copy /var/lib/rpm/Packages to the saved /some/where/else directory, and run "rpm --rebuilddb --dbpath /some/where/else" to regenerate the indices.
Note that you can also verify files using the original packaging, as in "rpm -Vp some*.rpm", which is often less hassle (and more secure, with read-only offline media storing the packages) than saving copies of the installed /var/lib/rpm/Packages rpmdb.
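Putting that together, a round trip might look like this (a sketch; the backup location is just an example of an external medium):
# sketch: snapshot the rpmdb after installs/upgrades, then verify against the snapshot later
mkdir -p /mnt/usb/rpmdb-backup
cp /var/lib/rpm/Packages /mnt/usb/rpmdb-backup/
rpm --rebuilddb --dbpath /mnt/usb/rpmdb-backup    # regenerate the indices from Packages
rpm -Va --dbpath /mnt/usb/rpmdb-backup            # verify every installed package against the saved db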
I have a folder with Wikipedia articles (XML format).
I want to import the files through the web interface (Special:Import). Currently I do it with iMacros, but it often hangs, needs a lot of resources (memory), and can only process one file at a time. So I am looking for a better solution.
I have figured out that I have to log in to get an edit token, which is needed to upload the file.
I have already read this, but got stuck.
To get this running I need two wget/curl command lines:
one to log in and get the edit token (post user and password to the form, get the edit token back)
one to push the file to the form (post the edit token and the file content to the form)
I can build the loop to process more than one file myself.
First of all, let's be clear: the web interface is not the right way to do this. MediaWiki installation requirements include shell access to the server, which would allow you to use importDump.php as needed for heavier imports.
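If you do have shell access, a bulk import is just a loop over the maintenance script (a sketch; paths are placeholders):
# sketch: import every XML file in the folder via the maintenance script
cd /path/to/mediawiki
for f in /path/to/articles/*.xml; do
  php maintenance/importDump.php "$f"
done
php maintenance/rebuildrecentchanges.php    # recommended after a large import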
Second, if you want to import a Wikipedia article from the web interface then you shouldn't be downloading the XML directly: Special:Import can do that for you. Set
$wgImportSources = array( 'wikipedia' );
or whatever (see manual), visit Special:Import, select Wikipedia from the dropdown, enter the title to import, confirm.
Third, if you want to use the command line, why not use the MediaWiki web API, which is also available through plenty of clients; most clients handle tokens for you.
Finally, if you really insist on using wget/curl over index.php, you can get a token visiting api.php?action=query&meta=tokens in your browser (check on api.php for the exact instructions for your MediaWiki version) and then do something like
curl -d "&action=submit&...@filename.xml" .../index.php?title=Special:Import
(intentionally partial code so that you don't run it without knowing what you're doing).
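For completeness, here is a sketch of the api.php route with curl (the wiki URL and bot credentials are placeholders, jq is assumed for JSON parsing, and interwikiprefix may be required for XML uploads on newer MediaWiki versions):
WIKI=https://wiki.example.org/w/api.php      # placeholder wiki endpoint
COOKIES=cookies.txt
# 1. get a login token and log in with a bot password (Special:BotPasswords)
LOGIN_TOKEN=$(curl -s -c "$COOKIES" "$WIKI?action=query&meta=tokens&type=login&format=json" | jq -r '.query.tokens.logintoken')
curl -s -b "$COOKIES" -c "$COOKIES" "$WIKI" -d "action=login&format=json" \
  --data-urlencode "lgname=MyBot@import" --data-urlencode "lgpassword=SECRET" \
  --data-urlencode "lgtoken=$LOGIN_TOKEN" > /dev/null
# 2. get a CSRF token for the import
CSRF_TOKEN=$(curl -s -b "$COOKIES" "$WIKI?action=query&meta=tokens&format=json" | jq -r '.query.tokens.csrftoken')
# 3. upload every XML file in the folder
for f in ./articles/*.xml; do
  curl -s -b "$COOKIES" "$WIKI" -F "action=import" -F "format=json" \
    -F "interwikiprefix=wikipedia" -F "xml=@$f" -F "token=$CSRF_TOKEN"
done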
I am hosting a website on Heroku, and using an SQLite database with it.
The problem is that I want to be able to pull the database from the repository (mostly for backups), but whenever I commit & push changes to the repository, the database should never be altered. This is because the database on my local computer will probably have completely different (and irrelevant) data in it; it's a test database.
What's the best way to go about this? I have tried adding my database to the .gitignore file, but that results in the database being completely unversioned, which prevents me from pulling it when I need to.
While git (just like most other version control systems) supports tracking binary files like databases, it really only works well for text files. In other words, you should never use a version control system to track constantly changing binary database files (unless they are created once and almost never change).
One popular method to still track databases in git is to track text database dumps. For example, an SQLite database can be dumped into a *.sql file using the sqlite3 utility (the .dump command). However, even when using dumps, it is only appropriate to track template databases which do not change very often, and to create the binary database from such dumps using scripts as part of the standard deployment.
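A sketch of that workflow (file names are placeholders):
# sketch: dump the SQLite database to a text file that git can track and diff
sqlite3 app.db .dump > app-dump.sql
# recreate a binary database from the dump, e.g. during deployment
sqlite3 fresh.db < app-dump.sql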
You could add a pre-commit hook to your local repository that unstages any files that you don't want to commit.
e.g. add the following to .git/hooks/pre-commit
git reset ./file/to/database.db
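In full, the hook is just this (a sketch; keep the path to your own database file):
#!/bin/sh
# .git/hooks/pre-commit: unstage the local test database before every commit
git reset ./file/to/database.db
Don't forget chmod +x .git/hooks/pre-commit, or git won't run it.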
When working on your code (and potentially modifying your database) you will at some point end up with something like this:
$ git status --porcelain
M file/to/database.db
M src/foo.cc
$ git add .
$ git commit -m "fixing BUG in foo.cc"
M file/to/database.db
.
[master 12345] fixing BUG in foo.cc
1 file changed, 1 deletion(-)
$ git status --porcelain
M file/to/database.db
So you can never accidentally commit changes made to your database.db.
Is it the schema of your database you're interested in versioning, while making sure you don't version the data within it?
I'd exclude your database from git (using the .gitignore file).
If you're using an ORM and migrations (e.g. Active Record) then your schema is already tracked in your code and can be recreated.
However, if you're not, then you may want to take a copy of your database, save out the CREATE statements, and version those.
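With SQLite that can be a one-liner (a sketch; file names are placeholders):
# sketch: export only the CREATE statements (schema without data) and put schema.sql under version control
sqlite3 database.db .schema > schema.sql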
Heroku doesn't recommend using SQLite in production, and suggests using their Postgres system instead. That lets you run many tasks against the remote DB.
If you want to pull the live database from Heroku, the instructions for Postgres backups might be helpful.
https://devcenter.heroku.com/articles/pgbackups
https://devcenter.heroku.com/articles/heroku-postgres-import-export
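With the Heroku CLI, the flow from those docs boils down to something like this (a sketch; myapp and the local database name are placeholders):
# sketch: capture a Heroku Postgres backup and pull it down for a local restore
heroku pg:backups:capture --app myapp
heroku pg:backups:download --app myapp              # writes latest.dump to the current directory
pg_restore --no-owner -d my_local_db latest.dump    # load it into a local Postgres database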