What would be the best way to store traceroute results? - database

One traceroute record may include:
A timestamp, with millisecond resolution.
A variable number of hops.
For each hop: IP address, hostname and RTT.
An overall result, e.g. successful, network unreachable, timed out.
Thanks.

I would use a database. You could use SQLite if you don't want to run a database server.
More details:
There is a nice little SQLite add-on for Firefox:
https://addons.mozilla.org/en-US/firefox/addon/sqlite-manager
That should help you set things up. I would create a field for each of the values you want to store, perhaps one for the overall result, and a primary key "id" field.
Getting your data into the database will be the least trivial part. If you're running Linux, you could write a bash shell script that captures the output of traceroute and calls a PHP script which inserts the data into the DB. Of course, you can use Python or any other language you like that supports your DB.
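A minimal sketch of that pipeline in Python, using only the standard library; the two-table schema and the parsing regex are my own assumptions rather than a canonical layout, and real traceroute output (multiple probes per hop, timeouts shown as *) needs more careful handling:

import re
import sqlite3
import subprocess
from datetime import datetime

conn = sqlite3.connect("traceroute.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS traceroutes (
    id      INTEGER PRIMARY KEY,
    ts      TEXT NOT NULL,   -- ISO timestamp, millisecond resolution
    target  TEXT NOT NULL,
    result  TEXT NOT NULL    -- e.g. successful / unreachable / timed out
);
CREATE TABLE IF NOT EXISTS hops (
    traceroute_id INTEGER REFERENCES traceroutes(id),
    hop_no   INTEGER,
    ip       TEXT,
    hostname TEXT,
    rtt_ms   REAL
);
""")

target = "example.com"
out = subprocess.run(["traceroute", target],
                     capture_output=True, text=True).stdout

cur = conn.execute(
    "INSERT INTO traceroutes (ts, target, result) VALUES (?, ?, ?)",
    (datetime.now().isoformat(timespec="milliseconds"), target, "successful"),
)
trace_id = cur.lastrowid

# Matches lines like: " 3  router.example.net (10.0.0.1)  12.345 ms ..."
hop_re = re.compile(r"^\s*(\d+)\s+(\S+)\s+\(([\d.]+)\)\s+([\d.]+) ms")
for line in out.splitlines():
    m = hop_re.match(line)
    if m:
        conn.execute(
            "INSERT INTO hops VALUES (?, ?, ?, ?, ?)",
            (trace_id, int(m.group(1)), m.group(3), m.group(2),
             float(m.group(4))),
        )
conn.commit()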

Related

Mongo shell: is there a way to execute JavaScript code remotely instead of doing the work on the local machine?

I have a MongoDB instance running on MongoDB Atlas, and I have a local machine.
I want to execute a script, but I would like this script to be executed on the Mongo instance.
I have tried several things, like Robo 3T and the Mongo shell, and the behaviour is not the one I want.
Suppose we have this script :
print(db.users.find({}).toArray().length);
My users collection has around 30k documents. I deliberately use toArray() to force the creation of a JS array, but I want this array to be created on the MongoDB instance, or close to it; not on the machine where I launched the Mongo shell (or Robo 3T).
Counting users is obviously not my real use case; if I just wanted the number of users, I would have used .count() and it would have been faster. I only want to illustrate that the code is not run at the location where I want it to run.
Suppose you connect to a remote server over SSH, and your own connection is very poor. If you run something like
wget http://moviedatabase.com/rocky.mp4
to download a 1 TB movie, it will take the same time whether your connection is blazing fast or amazingly slow: what counts is the bandwidth of the server you are connected to.
In my example, everything depends on the connection of the machine on which you launch the Mongo shell. If it has a good connection, it will be faster than if it has a poor one.
What is the way to execute JS code "closer" to the MongoDB instance?
How is this behaviour not a problem when you administer a MongoDB instance?
Thanks in advance,
Jerome
It depends on what you are trying to do.
There is no generic context where you can run arbitrary code, but you can store a JavaScript function on the server, which can then be used in $where or mapReduce.
Note that server-side JavaScript can be disabled with the security.javascriptEnabled configuration parameter.
I would expect that Atlas disables this for its free and shared tiers.
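A minimal sketch of what storing and calling such a function can look like from pymongo; the connection URI, collection and field names are made up, and stored JavaScript is deprecated in recent MongoDB versions (and, as said, likely unavailable on Atlas free/shared tiers):

from pymongo import MongoClient
from bson.code import Code

client = MongoClient("mongodb+srv://user:pass@cluster.example.net/")  # hypothetical URI
db = client.mydb

# Store a function on the server, in the special system.js collection.
db["system.js"].replace_one(
    {"_id": "isHeavyUser"},
    {"_id": "isHeavyUser",
     "value": Code("function(doc) { return doc.loginCount > 100; }")},
    upsert=True,
)

# $where evaluates the stored function on the server for each document;
# only the matching documents cross the network to the client.
heavy_users = list(db.users.find({"$where": "isHeavyUser(this)"}))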

How to check the number of commands that Redis has not processed yet

I am trying to monitor a Redis database. I'm using Telegraf, InfluxDB and Grafana to monitor it. Now I want to check the number and type of commands which are pending processing.
I checked this page:
Redis commands queue size. It helped a lot, but I hope I can get more information, like the number and type of commands, as I wrote. Is there any way to check it?
I found out that there are some commands starting with 'X' in the Redis command reference: https://redis.io/commands
I think this is what I want, so I'm looking at it now.
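For reference, those X* commands are the Redis Streams family; if the pending work sits in a stream with a consumer group, XPENDING reports how many entries have been delivered but not yet acknowledged. A small sketch with the redis-py client (the stream and group names here are made up):

import redis

r = redis.Redis(host="localhost", port=6379)

# Summary of unacknowledged entries for a consumer group on a stream.
summary = r.xpending("jobs", "workers")  # hypothetical stream/group names
print(summary["pending"], "entries delivered but not yet acknowledged")

# Per-command call counts since server start, for the "type of commands" part.
for cmd, stats in sorted(r.info("commandstats").items()):
    print(cmd, stats["calls"])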

Do I have to submit jobs to Spark, or can I run them from a client lib?

So I'm learning about Spark and I have a question about how client libs work.
My goal is to do some sort of data analysis in Spark, telling it where the data sources are (databases, CSVs, etc.), and to store the results in HDFS, S3 or some kind of database like MariaDB or MongoDB.
I thought about having a service (API application) that "tells" Spark what I want to do. The question is: is it enough to set the master configuration to spark://remote-host:7077 at context creation, or should I send the application to Spark with some sort of spark-submit command?
This completely depends on how your environment is set up. If all paths are linked to your account, you should be able to run one of the two commands below to open a shell and run test commands. The reason to use a shell is that it lets you run commands dynamically, validate/learn how to chain commands together, and see what results come out.
Scala
spark-shell
Python
pyspark
Inside the environment, if everything is linked to Hive tables, you can check the tables by running
spark.sql("show tables").show(100,false)
The above command runs a "show tables" against the Spark Hive metastore catalogue and returns all active tables you can see (which doesn't mean you can access the underlying data). The 100 means show up to 100 rows, and the false means show the full string rather than only the first N characters.
In a mythical example, if one of the tables you see is called Input_Table, you can bring it into the environment with the commands below:
val inputDF = spark.sql("select * from Input_Table")
inputDF.count
While you are learning, I would strongly advise against running the commands via spark-submit, because you will need to pass in the class and JAR, forcing you to edit and rebuild for each test, which makes it difficult to figure out how commands will run and costs a lot of downtime.
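As for the original question about pointing client code at spark://remote-host:7077, here is a sketch of what that looks like with PySpark; the master URL, paths and column name are placeholders. Note that in this client mode the driver still runs on your machine; spark-submit --deploy-mode cluster is what moves the driver onto the cluster itself:

from pyspark.sql import SparkSession

# Connect straight from client code; no spark-submit involved.
spark = (SparkSession.builder
         .master("spark://remote-host:7077")  # placeholder master URL
         .appName("client-driven-analysis")
         .getOrCreate())

df = spark.read.csv("s3a://my-bucket/input.csv", header=True, inferSchema=True)
(df.groupBy("category")                       # placeholder column
   .count()
   .write.mode("overwrite")
   .parquet("hdfs:///results/category_counts"))

spark.stop()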

Bigcouch Clustering not Working

I am new to BigCouch. I have successfully set up BigCouch on two different systems, and each works perfectly fine.
On the first BigCouch node I have some DBs which I want replicated onto the other one, so I copied all the shards from the first node to the other. Then I used the clustering command to join them:
curl -X PUT db01.yourhostname.com:5986/nodes/bigcouch#db02.yourhostname.com -d {}
It gives a success result, but when I try to create any new database I get an internal server error.
My first question: is copying the shards from one node to the other a good way to set up clustering? I am not sure it is the correct way, so can anyone tell me how to do it successfully, or what I am missing?
Thanks.
Check that both servers are cluster-aware of each other by issuing the following on each:
curl 127.0.0.1:5984/_membership
If that looks OK, try pinging from one node to the other using the FQDN to make sure it is resolvable; BigCouch assumes resolvable FQDNs by default.
Also, I've seen this happen when you try to change the FQDN, either in the Erlang node name or the server hostname. There doesn't seem to be any coping mechanism for that.
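A quick sketch of that membership check from Python with the requests library, using the hostnames from the question; on a healthy cluster, both nodes should list each other under cluster_nodes:

import requests

for host in ["db01.yourhostname.com", "db02.yourhostname.com"]:
    info = requests.get(f"http://{host}:5984/_membership").json()
    print(host)
    print("  all_nodes:    ", info.get("all_nodes"))
    print("  cluster_nodes:", info.get("cluster_nodes"))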

Obtaining Raw Data from NagiosXI and/or OPSview

I am currently working on completing my Masters thesis project. In order to do so, I need to be able to obtain the raw data accumulated in NagiosXI and/or OPSview. Because both of these are based on the Nagios core, I assume the method of obtaining the raw data may be similar. I need this raw data so that I can later perform specific statistical calculations for my thesis. I have looked online and so far found some Nagios plugins which obtain raw data and then manipulate it for graphs and visuals, but I need the raw numbers in order to complete my calculations.
I am also researching whether I can write a PHP script, or one in some other language, that will extract the data from Nagios and save it in a Word or Excel document. However, this would be a fair amount of extra work, as I am unfamiliar with both PHP and MySQL queries. Because of this, I hope to find a plugin, or something similar, that can get the data for me.
Cyanide,
I can't speak for NagiosXI, but I can for Opsview :)
You could access the data stored in the RRD files. You can use rrdtool dump to pull the values out, or use a URL like: /rrdfetch?start=1307608993&end=1307695393&hsm=opsview%3A%3ACheck%20Loadavg%3A%3Aload1&hsm=opsview%3A%3ACheck%20Loadavg%3A%3Aload5
which returns the JSON data points. This is undocumented, but it is what powers the interactive JavaScript graphing.
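A sketch of pulling those data points from Python and writing them to CSV (which would also cover the Excel requirement from the question); since the endpoint is undocumented, the base URL, the authentication and the JSON layout here are all assumptions to adapt to your install:

import csv
import requests

url = ("https://opsview.example.com/rrdfetch"
       "?start=1307608993&end=1307695393"
       "&hsm=opsview%3A%3ACheck%20Loadavg%3A%3Aload1")
resp = requests.get(url, auth=("apiuser", "apipass"))  # auth scheme is an assumption
data = resp.json()

with open("loadavg.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "value"])
    # Assumed layout: a list of metrics, each with [timestamp, value] pairs.
    for point in data["list"][0]["data"]:
        writer.writerow(point)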
Alternatively, if you have ODW enabled with full statistics, then the raw data is stored in the ODW database and you can then extract the raw data with SQL commands. See http://docs.opsview.com/doku.php?id=opsview-community:odw for more information.
Ton
You can try MK Livestatus: http://mathias-kettner.de/checkmk_livestatus.html
or http://exchange.nagios.org/directory/Addons/APIs/JSON/Nagios2JSON/details
All of these tools get you status data without needing to go to the DB or the status file. Since XI is based on the Nagios core, they should still work with it.
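For illustration, a raw Livestatus query from Python over its Unix socket; the socket path differs per installation, so treat it as an assumption:

import socket

SOCKET_PATH = "/var/lib/nagios/rw/live"  # assumption: depends on your install

query = (
    "GET services\n"
    "Columns: host_name description state last_check\n"
    "OutputFormat: json\n"
    "\n"  # a blank line terminates the query
)

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect(SOCKET_PATH)
s.sendall(query.encode())
s.shutdown(socket.SHUT_WR)  # signal that the query is complete
print(s.makefile().read())  # raw JSON rows of service status data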
Please take a look at http://dmytro.github.com/nagira
It's a web-services API to access Nagios data. You can get all hosts, service status data, and object configuration in multiple formats: JSON, XML or YAML.
