How can I export information from hawtio's dashboard to a database in real time? I want to save a history of CPU load etc. and read it back from the database later. Or maybe this is the wrong approach altogether, and it would be better to write something specific for my task using Jolokia?
hawtio is the visualization of the data; you are better off extracting the data yourself with Jolokia and storing it in the database. Jolokia makes extracting the data easier, as you can use REST over HTTP as the transport instead of native Java JMX.
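For example, a minimal polling sketch in Python (the Jolokia JVM agent on its default port 8778, the exact MBean attribute, and the SQLite table are assumptions; any database client would work the same way):
import sqlite3
import time
import requests
# Assumed Jolokia agent endpoint; adjust host, port and MBean/attribute to your deployment.
JOLOKIA_URL = "http://localhost:8778/jolokia/read/java.lang:type=OperatingSystem/SystemCpuLoad"
conn = sqlite3.connect("metrics.db")
conn.execute("CREATE TABLE IF NOT EXISTS cpu_load (ts INTEGER, value REAL)")
while True:
    # Jolokia exposes JMX attributes as plain HTTP/JSON, so a simple GET is enough.
    resp = requests.get(JOLOKIA_URL, timeout=5).json()
    conn.execute("INSERT INTO cpu_load VALUES (?, ?)", (int(time.time()), resp["value"]))
    conn.commit()
    time.sleep(10)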
I am building a web app using Node.js with an Angular-based frontend and a Firebase/AngularFire2 backend. I have a list of about 80 cities, plus a couple of details about each of them, that I need to display with checkboxes for the user.
Should I save them as a JSON object in a .json file on the server and fetch it, or just store it in my Realtime Database and query it? Are there any speed/memory benefits to either?
There are two scenarios:
1. Your task is search-oriented: you have to query the data and manipulate it, memory management is a key issue for you, and you want some complex search operations over your data. Then go for the database.
2. Your task requires the whole data set at once and you don't need to worry about memory management. Then load the data directly from the file; this saves the time spent opening a connection to your database and works as simply as a file stream. [suggested for your case; a sketch follows below]
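As an illustration of the second approach (shown in Python rather than the asker's Node stack; cities.json is a hypothetical file holding the ~80 records), loading the whole file is a single read with no connection overhead:
import json
# Load the whole list once at startup; ~80 small records fit easily in memory.
with open("cities.json", encoding="utf-8") as f:
    cities = json.load(f)
# The app can then serve or filter the list directly, with no database round-trip.
print(len(cities), "cities loaded")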
I'm trying to use Google BigQuery to download a large dataset for the GitHub Data Challenge. I have designed my query and am able to run it in the Google BigQuery console, but I am not allowed to export the data as CSV because it is too large. The recommended help tells me to save it to a table. As far as I can tell, this requires me to enable billing on my account and make a payment.
Is there a way to save datasets as CSV (or JSON) files for export without payment?
For clarification, I do not need this data on Google's cloud and I only need to be able to download it once. No persistent storage required.
If you can enable the BigQuery API without enabling billing on your application, you can try using the getQueryResult API call. Your best bet is probably to enable billing (you probably won't be charged for the limited usage you need, as you will likely stay within the free tier, and if you do get charged it should only be a few cents) and save your query results to a Google Storage object. If the result set is too large, I don't think you'll be able to use the Web UI effectively.
See the documentation on this exact topic:
https://developers.google.com/bigquery/exporting-data-from-bigquery
Summary: Use the extract operation. You can export CSV, JSON, or Avro. Exporting is free, but you need to have Google Cloud Storage activated to put the resulting files there.
Use the bq command-line tool:
$ bq query --format=csv 'SELECT ...'
The --format flag saves the results as CSV.
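If you prefer a script over the CLI, roughly the same flow with the google-cloud-bigquery Python client looks like this (the project, dataset, table and bucket names are placeholders; exporting still needs a Cloud Storage bucket, and large results still need a destination table, as described above):
from google.cloud import bigquery
client = bigquery.Client()
# Write the (large) query result to a destination table instead of fetching it directly.
job_config = bigquery.QueryJobConfig(destination="my-project.github_challenge.results")
client.query("SELECT ... FROM ...", job_config=job_config).result()
# Export the table to Cloud Storage as CSV (the default extract format).
client.extract_table(
    "my-project.github_challenge.results",
    "gs://my-bucket/results-*.csv",
).result()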
I want to export some tables in my DB to an Excel/Spreadsheet every month.
In phpMyAdmin there is a direct option to export the result of a query to the desired file type. How do I make use of this export feature on a monthly basis via a cronjob, without writing another script?
Basically, on cPanel (the DB is hosted on the web) we just have to give the path to the script to be executed via a cronjob. But phpMyAdmin offers no such option; it is a built-in feature of phpMyAdmin that we normally trigger manually by clicking. So how do I do it in cPanel?
Do you have SSH access to the box? Personally I'd implement this outside of phpMyAdmin, as phpMyAdmin is just intended for manual operations via the interface. Why not write a simple script to export the DB?
Something like mysqldump database table.
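For instance, a small script like the following (Python here; the database name, credentials and paths are placeholders) can be pointed to from the cPanel cron job and will write a dated dump on every run:
#!/usr/bin/env python
import subprocess
import time
# Placeholder credentials and paths; adjust to your hosting setup.
outfile = "/home/youruser/exports/mytable-%s.sql" % time.strftime("%Y-%m")
with open(outfile, "w") as f:
    # mysqldump writes the dump to stdout, which we redirect into the dated file.
    subprocess.check_call(
        ["mysqldump", "-u", "dbuser", "-pDBPASSWORD", "mydatabase", "mytable"],
        stdout=f,
    )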
Being a web app, the export function is a POST request. In the demo application the URL is http://demo.phpmyadmin.net/STABLE/export.php, and the POST data contains all the required parameters, for example (you can use Fiddler/Chrome dev tools to view it):
token:3162d3b849cf652c2577a45f90022df7
export_type:server
export_method:quick
quick_or_custom:custom
output_format:sendit
filename_template:#SERVER#
remember_template:on
charset_of_file:utf-8
compression:none
what:excel
codegen_structure_or_data:data
codegen_format:0
csv_separator:,
csv_enclosed:"
.....
The one tricky bit is the authentication token, but I believe this is also possible to overcome using some configuration and/or extra parameters (like the 'direct login' on http://demo.phpmyadmin.net/).
See here: How to send data using curl from Linux command line?
If you want to avoid all this, there are many other web-automation tools that can record the scenario and play it back.
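A rough sketch of that scripted POST in Python (the URL and parameter names are taken from the capture above; the login and token handling is only outlined, since it depends on your phpMyAdmin configuration):
import requests
session = requests.Session()
# Log in first so the session cookie and token are valid; details depend on your setup.
params = {
    "token": "REPLACE_WITH_VALID_TOKEN",
    "export_type": "server",
    "export_method": "quick",
    "what": "excel",
    "csv_separator": ",",
    "filename_template": "#SERVER#",
    # ...remaining fields as captured above...
}
resp = session.post("http://demo.phpmyadmin.net/STABLE/export.php", data=params)
with open("export.xls", "wb") as f:
    f.write(resp.content)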
Just write a simple PHP script to connect to your database and use the answer here: How to output MySQL query results in CSV format?
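If you go that route, here is the same idea sketched in Python rather than PHP (pymysql and all connection, table and column details are placeholder assumptions):
import csv
import pymysql  # any MySQL client library works the same way
conn = pymysql.connect(host="localhost", user="dbuser", password="DBPASSWORD", database="mydatabase")
with conn.cursor() as cur, open("export.csv", "w", newline="") as f:
    cur.execute("SELECT * FROM mytable")
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row from column names
    writer.writerows(cur.fetchall())
conn.close()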
I have a model called "Category" in my app in GAE.
This model simply contains a name and its parent category, and it won't be changed frequently after the website goes online.
I'd like to know: what is a better way to put these model instances in at the beginning?
At the moment I only know how to execute category.put() in a webapp.RequestHandler by issuing an HTTP request, but I suspect there is a more proper way to do this.
Thanks!
You can use the remote API to connect to your datastore in a shell and add data as required.
Or, if it's a huge amount, you could think about using the bulk loader - but I suspect that the remote API will be more suitable.
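For example, a small seeding snippet run through the SDK's remote_api_shell.py (the Category model shown here is a guess at the question's model; adjust the properties to match yours):
# Run inside a remote_api_shell.py session connected to your app.
from google.appengine.ext import db
class Category(db.Model):
    name = db.StringProperty(required=True)
    parent_category = db.SelfReferenceProperty()
# Create a root category and a couple of children pointing back to it.
root = Category(name="Books")
root.put()
Category(name="Fiction", parent_category=root).put()
Category(name="Non-fiction", parent_category=root).put()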
I am building a website (probably in Wordpress) which takes data from a number of different sources for display on various pages.
The sources:
A Twitter feed
A Flickr feed
A database on a remote server
A local database
From each source I will mainly retrieve
A short string, e.g. the tweet text from Twitter, or the title of a blog post from the local database.
An associated image, if one exists
A link identifying the content at its source
My question is:
What is the best way to a) store the data and b) retrieve the data?
My thinking is:
i) Write a script that is run every 2 or so minutes on a cron job
ii) the script retrieves data from all sources and stores it in the local database
iii) application code can then retrieve all data from the one source, the local database
This should make application code easier to manage - we only ever draw data from one source in application code - and that's the main appeal. But is it overkill for a relatively small site?
I would recommend putting the Twitter feed and Flickr feed in JavaScript on the client. Both Flickr and Twitter have REST APIs. By putting it on the client you free up resources on your server, reduce complexity, your users won't be waiting around for your server to fetch the data, and you can let Twitter and Flickr cache the data for you.
This assumes you know JavaScript. Once you get past its quirks, it's not a bad language. Give jQuery a try: there is a jQuery Twitter plugin and the Flickery jQuery plugin, among others; those are just the first results from Google.
As for your data on the local server and the remote server, that will depend more on the data being fetched. I would go with whatever you can develop the fastest that gives acceptable results. If that means making a REST call from server to server, then go for it. If the remote server is slow to respond, I would go with the client-side AJAX REST API method instead.
And for the local database, you are going to have to write server-side code for that, so I would do that inside the Wordpress "framework".
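For the two database sources that do stay on the server, a minimal cron-driven fetch-and-store sketch in Python (the remote endpoint, table name and SQLite file are placeholders standing in for your actual databases):
import sqlite3
import time
import requests
# Placeholder endpoint on the remote server that returns items as JSON.
REMOTE_URL = "https://remote.example.com/api/items.json"
conn = sqlite3.connect("local_content.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS content "
    "(source TEXT, title TEXT, image TEXT, link TEXT, fetched_at INTEGER)"
)
# Normalize each item into the common string/image/link shape and store it locally.
for item in requests.get(REMOTE_URL, timeout=10).json():
    conn.execute(
        "INSERT INTO content VALUES (?, ?, ?, ?, ?)",
        ("remote_db", item.get("title"), item.get("image"), item.get("link"), int(time.time())),
    )
conn.commit()
The Wordpress templates would then only ever read from that one local table, as the question's plan describes.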
Hope that helps.